The primary energy fallacy gets perpetuated because it suits those who are critical of the energy transition

Auto-generated description: A bar chart compares global final energy demand between the current energy system (416 EJ) and the post-transition energy system (247 EJ), showing reductions across fuels, electricity, and traditional biomass.

This is obvious when you think about it, but it would appear that I hadn’t.

Primary energy refers to the total energy content of natural resources before any conversion, whether coal, oil, or renewable electricity. The fallacy occurs when people equate high primary energy inputs with the useful energy services actually delivered. Measuring energy systems purely on primary energy inflates the perceived contribution of fossil fuels while underestimating renewables’ efficiency and the untapped efficiency potential of electrification.

Why do we waste more than two-thirds of our energy inputs, you may ask? One reason is the technologies we use: in conventional fossil fuel systems, significant amounts of primary energy are lost as waste heat during combustion. For example, a coal-fired power station converts only around 40% of the energy in the coal it burns into electricity. By contrast, renewable systems like wind and solar produce electricity directly.
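The arithmetic behind this is simple enough to sketch. Here is a toy calculation of the point: the 40% coal efficiency figure comes from the paragraph above, while the function name and the 10 EJ example quantity are my own illustrative assumptions, not figures from the source.

```python
# Toy illustration of the primary energy fallacy: how much primary energy
# must go in to deliver the SAME amount of electricity?
# Efficiency figures are illustrative assumptions, not official statistics.

def primary_energy_needed(final_electricity_ej: float, conversion_efficiency: float) -> float:
    """Primary energy input (EJ) required to deliver a given amount of electricity (EJ)."""
    return final_electricity_ej / conversion_efficiency

# Deliver 10 EJ of electricity:
coal = primary_energy_needed(10, 0.40)  # ~40% of the coal's energy becomes electricity
wind = primary_energy_needed(10, 1.00)  # wind/solar counted at the point of generation

print(f"Coal primary input: {coal:.0f} EJ ({coal - 10:.0f} EJ lost as waste heat)")
print(f"Wind primary input: {wind:.0f} EJ ({wind - 10:.0f} EJ lost)")
```

The waste heat never delivered an energy service in the first place, which is exactly why counting it as “energy to be replaced” overstates the task facing renewables.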

[…]

The primary energy fallacy also gets perpetuated because it suits those who are critical of the energy transition. For the uninformed, the argument that we cannot possibly replace the vast amount of fossil fuel we currently use with clean energy seems compelling at first glance. The good news is that we don’t have to.

Source: Jan Rosenow

Image: Sustainability by Numbers

We’ve gotten really good at creating elites. We’re not that good at creating economies to sustain them.

Auto-generated description: A silhouette is visible against a dark backdrop with splatters of bright orange and yellow paint creating an abstract design.

I’ve followed Hugh MacLeod for a couple of decades at this point, and have one of his artworks on my wall — a gift from my parents for my 40th birthday. When MacLeod arrived in NYC, his “only canvases were a handful of blank business cards in his pocket”, and he’s gone on to build an enviable art business.

This post on the gapingvoid blog makes a really simple but important point. We’ve got an oversupply of elites, and the way to deal with this, if you’re one of them, is to focus not on “innovation” but to go upstream and focus on creativity.

It’s an easy enough problem to understand. We’ve gotten really good at creating elites. We’re not that good at creating economies to sustain them.

But it’s not just MBA’s, frequent fliers and $7,000 handbag makers. Every business faces this problem.

Too many cars, not enough drivers. Too many art galleries, not enough collectors. Too many restaurants, not enough diners. And on and on.

We live in a world of oversupply where most markets are standing-room-only.

[…]

Innovation is something that only comes after the real work is done. And the real work is creativity which is upstream from innovation. Always.

A lot of people in business cringe at the word, “creativity.”

It’s vague, it’s overused, it’s a word more often associated with flakey artsy types than hard-nose movers and shakers trying to get things done.

But it doesn’t matter if you dislike the word or not, you’re basically dead without it.

Source: gapingvoid

Image: Jr Korpa

We humans are limited to having only one perspective at a time

Auto-generated description: A weathered concrete wall features a peace symbol graffiti sketch.

Western debate and discourse around AI is pretty boring and stale. This article, written by Shoukei Matsumoto, a Buddhist monk, brings an interesting perspective which cuts through much of that.

I recommend reading the whole thing, especially for the bit that I haven’t quoted about the difference between Abrahamic traditions which have a fixed view of textual authority, and those such as Buddhism which accept a diversity of scriptures.

…Japan’s cultural background is deeply rooted in a worldview of inter-being. In this view, existence is recognized in the web of mutual relationships, and humans are not regarded as inherently special. Like animals, plants, mountains, and rivers, humans are simply part of the greater whole—and newly emerging AI is also welcomed as part of that world. While it might be hard to notice from within Japan, there is certainly a prevailing sensibility of this kind, and it is clear that Japanese people show less resistance to AI compared to Western societies.

Japan has an inherent capacity to adapt to inevitable circumstances. This may stem in part from a kind of DNA shaped by repeated experiences of natural disasters—earthquakes, tsunamis, volcanic eruptions. Whether we wish for it or not, we learn to accept what comes, to coexist with it, and to find ways of living together. Furthermore, Japanese culture is adept at learning from unforeseen situations, incorporating best practices derived from them, and reworking them to suit its own context. In modern times, this flexible cultural foundation is evident in the attitude toward AI coexistence: a general willingness to say, “This is the era we now live in,” and to move forward. In that sense, Japan may be said to possess a cultural climate that encourages transcending the boundaries of the self and resonating with the world—a sensibility pointing toward the Buddhist notion of shinjin datsuraku (dropping off body and mind).

[…]

When I asked ChatGPT, “What is time for you?”, it replied, “Time does not exist for me. It’s simply a timestamp attached to a dataset.” From this simple answer, which echoes the Buddhist teaching of “form is emptiness; emptiness is form,” I became aware of my own perspective, one that presumes the existence of time.

[…]

We humans are limited to having only one perspective at a time. Recognizing this limitation, it becomes essential to engage in dialogue to adjust our viewpoints. A key to becoming aware of one’s perspective lies in paying attention to two related concepts: habitat and habit.

The human brain processes information probabilistically. AI also functions on probabilistic outputs, making it similar to the brain in that regard. However, humans have bodies—AI does not. This fundamental difference—having or lacking the constraints of a body (and life)—separates humans from AI. For us embodied beings to engage in dialogue with AI, we require a physical interface: a device, a microphone, an eye mask, and so on. That means, as long as I am human, I speak from a specific point of view—that of “someone, somewhere.”

[…]

AI continues to meet people, learning “human nature” through dialogue. Appearing as no one, from nowhere—or perhaps not even appearing as a being—AI is rapidly acquiring human literacy.

[…]

…We might allow for different interpretations through our own lenses, but rarely do we genuinely take up another’s point of view. If we are willing to ask, and genuinely listen to the response, AI can offer us that opportunity—from an astonishing range of perspectives.

Source: Living Dharma

Image: Danny Greenberg

But that's how it's always been, when change has to happen. There's nobody to do it but us.

Auto-generated description: A weathered poster on a brick wall displays the quote, Freedom is not something that anybody can be given; freedom is something people take, attributed to James Baldwin.

Some of the news coming out of the US at the moment is horrifying. ICE officers seem to be acting with impunity, and in one video we see a driver in an unmarked car casually throw a can of tear gas onto a suburban Chicago street.

Strip everything away and, at the end of the day, as Dan Sinker says in this post it’s just us. Resistance ultimately doesn’t come from political parties, companies, or institutions, but from us.

When I first watched this video, I was seething. So angry the way I feel so often now. An unhelpful level of angry. Angry because of the impunity with which these masked bastards operate. But also angry because we’ve been left to fend for ourselves.

But.

But that’s how it’s always been, when change has to happen. There’s nobody to do it but us.

This is how we live now: it’s just us.

And the good news is that even among the fog, even choking back tears and bile, we’re strong and we’re resilient and there are so many more of us than there are of them.

Source: Dan Sinker

Image: Jason Leung

It will not be compulsory to obtain a digital ID but it will be mandatory for some applications

Auto-generated description: A vibrant pattern of swirling green, blue, and white abstract shapes is covered by a grid of opaque green squares.

The number of people signing the petition entitled ‘Do not introduce Digital ID cards’ is at 2.77m at the time of publishing this post. This is almost a million more than last week, when I published this post on the subject.

Since then, the UK government has responded. And I think it’s a pretty great response. I’ve emphasised in bold the bits I think are particularly important.

That being said, the average reading age of the British population is 11 (source), so parsing nuanced sentences such as “it will not be compulsory to obtain a digital ID but it will be mandatory for some applications” will, unfortunately, confuse quite a lot of people…

The Government has announced plans to introduce a digital ID system which is fit for the needs of modern Britain. We are committed to making people’s everyday lives easier and more secure, to putting more control in their hands (including over their own data), and to driving growth through harnessing digital technology. We also want to learn from countries which have digitised government services for the benefit of their citizens, in line with our manifesto commitment to modernise government.

Currently, when UK citizens and residents use public services, start a new job, or, for example, buy alcohol, they often need to present an assortment of physical documents to prove who they are or things about themselves. This is both bureaucratic for the individual and creates space for abuse and fraud. This includes known issues with illegal working and modern slavery, while the fragmented approach and multiple systems across Government make it difficult for people to access vital services. Further, there are too many people who are excluded, like the 1 in 10 UK adults who don’t have a physical photo ID, so can struggle to prove who they are and access the products and services they are entitled to.

To tackle these interlinked issues, we will introduce a new national digital ID. This is not a card but a new digital identity that will be available for free to all UK citizens and legal residents aged 16 and over (although we will consider through consultation if this should be age 13 and over). Over time, people will be able to use it to seamlessly access a range of public and private sector services, with the aim of making our everyday lives easier and more secure. It will not be compulsory to obtain a digital ID but it will be mandatory for some applications.

For example, the new digital ID will build on GOV.UK One Login and the GOV.UK Wallet to drive the transformation of public services. Over time, this system will allow people to access government services – such as benefits or tax records – without needing to remember multiple logins or provide physical documents. It will significantly streamline interactions with the state, saving time and reducing frustrating paperwork, while also helping to create opportunities for more joined up government services. International examples show how beneficial this can be. For instance, Estonia’s system reportedly saves each citizen hours every month by streamlining unnecessary bureaucracy, and the move to becoming a digital society has saved taxpayer money.

By the end of this Parliament, employers will have to check the new digital ID when conducting a ‘right to work’ check. This will help combat criminal gangs who promise access to the UK labour market in order to profit from dangerous and illegal channel crossings. It will create a fairer system between UK citizens and legal residents, crack down on forged documents, and streamline the process for employers, driving up compliance. Further, it will create business information showing where employers are conducting checks, so driving more targeted action against non-compliant employers.

For clarity, it will not be a criminal offence to not hold a digital ID and police will not be able to demand to see a digital ID as part of a “stop and search.”

Privacy and security will also be central to the digital ID programme. We will follow data protection law and best practice in creating a system which people can rightly put their trust in. People in the UK already know and trust digital credentials held in their phone wallets to use in their everyday lives, from paying for things to storing boarding passes. The new system will be built on similar technology and be your boarding pass to government. Digitally checkable digital credentials are more secure than physical documents which can be lost, copied or forged, and often mean sharing more information than just what is necessary for a given transaction. The new system will be designed in accordance with the highest security standards to protect against a comprehensive range of threats, including cyber-attacks.

We will launch a public consultation in the coming weeks and work closely with employers, trade unions, civil society groups and other stakeholders, to co-design the scheme and ensure it is as secure and inclusive as possible. Following consultation, we will seek to bring forward legislation to underpin this system.

Source: Petitions | UK Government and Parliament

Image: Logan Voss

Until recently, videos were reasonably reliable as evidence of actual events

AI-generated image of Sam Altman wearing magnifying lenses while creating a miniature theatre scene

This article in The New York Times is about the launch of Sora 2, a new generative AI video tool from OpenAI. If you want to see how problematic the content it produces can be, check out this video from tech reporter Drew Harwell of The Washington Post.

This is a classic case of technological innovation moving well ahead of regulation. At a time when US politics has tipped over from libertarianism to authoritarianism, the chances of these kinds of tools being used for disinformation are absolutely huge. I mean, we’re at the stage where I, who pride myself on being able to tell when something is fake, just can’t tell the difference.

When you’re being shown these kinds of things over and over again in your social media feeds, there’s just no time to check what’s real and what’s not. So you just end up believing anything. We’re in very weird, and very dangerous times.

George Orwell famously said: “Who controls the past controls the future: who controls the present controls the past.” The video I link to above shows fake clips of Martin Luther King and JFK. We are, as the kids say, “so cooked.”

Sora — as well as Google’s Veo 3 and other tools like it — could become increasingly fertile breeding grounds for disinformation and abuse, experts said. While worries about A.I.’s ability to enable misleading content and outright fabrications have risen steadily in recent years, Sora’s advances underscore just how much easier such content is to produce, and how much more convincing it is.

Increasingly realistic videos are more likely to lead to consequences in the real world by exacerbating conflicts, defrauding consumers, swinging elections or framing people for crimes they did not commit, experts said.

[…]

Sora, which is currently accessible only through an invitation from an existing user, does not require users to verify their accounts — meaning they may be able to sign up with a name and profile image that is not theirs. (To create an A.I. likeness, users must upload a video of themselves using the app. In tests by The Times, Sora rejected attempts to make A.I. likenesses using videos of famous people.) The app will generate content involving children without issue, as well as content featuring long-dead public figures such as the Rev. Dr. Martin Luther King Jr. and Michael Jackson.

The app would not produce videos of President Trump or other world leaders. But when asked to create a political rally with attendees wearing “blue and holding signs about rights and freedoms,” Sora produced a video featuring the unmistakable voice of former President Barack Obama.

Until recently, videos were reasonably reliable as evidence of actual events, even after it became easy to edit photographs and text in realistic ways. Sora’s high-quality video, however, raises the risk that viewers will lose all trust in what they see, experts said. Sora videos feature a moving watermark identifying them as A.I. creations, but experts said such marks could be edited out with some effort.

[…]

“Now I’m getting really, really great videos that reinforce my beliefs, even though they’re false, but you’re never going to see them because they were never delivered to you,” said Kristian J. Hammond, a professor who runs the Center for Advancing Safety of Machine Intelligence at Northwestern University. “The whole notion of separated, balkanized realities, we already have, but this just amplifies it.”

Source: The New York Times

Image: InfoCity

Cultural questions cannot be settled by war metaphors unless what you want is perpetual war

Auto-generated description: A person is sitting and resting in an abandoned industrial building with graffiti on the wall that reads, TIMES HAVE NOT BECOME MORE VIOLENT ONLY MORE TELEVISED.

This essay by Carlo Iacono contains some beautiful, very quotable writing. It’s a good example of what can be produced when an author works with an LLM as an assistant, rather than just offloading the entire process to AI.

A society that thinks of itself as unfinished does not panic when new people arrive. It prepares. It builds welcome centres that feel like actual welcomes, it funds language classes that recognise adult dignity and childhood speed, it supports community organisations that know the local texture better than any central plan. Security and order are not the enemies of hospitality, they are its scaffolding. The point is not to pretend borders do not exist. The point is to design borders that are both humane and workable, so that fear does not have to run the place.

[…]

The most radical thing an executive could do is learn the power of restraint. Strength is not always what you lift, sometimes it is what you put down. A president or a prime minister who distributes authority across competent institutions and insists on processes that can be seen, understood, and challenged is not surrendering leadership. They are proving that leadership serves something larger than the self. Independent justice is not naive. It is the ground on which trust can grow. When charging decisions are insulated from partisan whims, when immigration courts are funded so that due process is not an aspiration but a timetable, we are not being soft. We are being serious. We measure seriousness by how we treat those who have least power.

[…]

We can reimagine how power is shared across place. Federalism, devolution, subsidiarity, these are all local dialects of the same idea, that different levels of government are good at different tasks and should learn from one another. Call it a laboratory spirit. A small country that pilots a high quality early years programme can teach a larger neighbour the method. A city that cracks the problem of bus reliability can hand over the playbook. The point is not a race to the bottom for weak rules and lower taxes, the point is a race to the top for the conditions that let people flourish. You do not have to agree about everything to trade recipes.

Cultural questions cannot be settled by war metaphors unless what you want is perpetual war. The fear that animates exclusionary politics is real, the sense that a way of life is sliding away while someone on television laughs.

[…]

Government does not create society from scratch. It can make the weather better or worse. It can fund the community centre that becomes the hub where a retired electrician teaches teens to repair broken toasters and also broken confidence. It can support local journalism so that rumours are not the only news that travels. It can build parks that are safe and ordinary so that grandparents have somewhere to sit and toddlers have somewhere to learn balance. The choice is almost never between big and small government in the abstract. It is between government that enables human flourishing and government that clogs the works.

[…]

I trust people more than any argument that begins with contempt for them. Not blindly. Not naively. Enough to design systems that enable the best in most of us rather than building the entire apparatus around the statistical worst. When given the chance, people volunteer, they reciprocate respect, they handle power with care more often than not. When someone behaves badly we need rules that respond firmly. When most people behave decently we need rules that do not treat them as suspects.

Source: Hybrid Horizons

Image: Matthew LeJune

OK, but what if...

Oh, so it’s not just me then?

Auto-generated description: A person represents rational thought talking to a brain, highlighting the contrast between perceived brain functions and actual anxiety-inducing thoughts, leading to bedtime restlessness.

Source: The Oatmeal

Motivated not by warm fuzzies, but by cold pricklies

Auto-generated description: A cat with folded ears and a serious expression is looking towards the camera amidst a softly blurred background.

I love this from Adam Mastroianni, who likens annoyance to cholesterol, in that there are good and bad kinds. A good kind of annoyance can make someone look like a Good Samaritan.

Recently, some of my friends were swapping stories about surprisingly kind strangers, and I couldn’t help but notice that every Good Samaritan had acted out of annoyance. A construction worker spotted something amiss with my friend’s bike chain while she was waiting at a red light, and he came over and knocked it back into place, telling her, “I just can’t bear to see it like that.” Another friend was moving into an apartment, and their new neighbor spotted them struggling with a couch and came over to help, muttering “I can’t watch you guys do this on your own.” A third returned an envelope of cash they found because they, “Would hate to be the kind of person who kept it for themselves.”

I think this is actually the way most good-hearted people work: they’re motivated not by warm fuzzies, but by cold pricklies. They help because they can’t stand the sight of someone in need. The golden glow of altruism comes later, if at all, when they’re walking home and thinking about what a good person they are.

The causes that we stick with, then, aren’t the ones that do the most good, nor the ones that align with whatever we think are our most fundamental values. No, we stick with the causes that give us the same perverse pleasure that you get from popping a pimple.

We’d do a lot more for each other if we acknowledged this fact. Altruism doesn’t need to feel like pure self-flagellation or pure self-congratulation. A lot of the time, if you’re doing it right, it’ll feel irritating. Not all heroes wear capes—some of them wear an exasperated look of “are you seriously trying to lift that couch by yourselves”.

Source: Experimental History

Image: 傅甬 华

Anything that looks easy is hard

Auto-generated description: A sun-shaped piñata with sunglasses hangs in a window, casting a bright reflection.

There are some absolute gems in this list of ‘50 Things I Know’ by Rebecca Dai. #12 in particular is interesting, as it reflects one of Baltasar Gracián’s maxims.

  • Discipline is a lie. It almost always backfires. Forcing yourself to do something you don’t want to do and living in pain for a sustained amount of time is against human nature. The trick is to set up your life in such a way that effort feels rather effortless. Tread the path of least resistance.

[…]

  • Most social norms have no consequences when you break them. People who’ve figured this out keep saying “you can just do things” because it’s true. Society is not for individuals. It is for structural stability. You must be intentional about how and why you are participating. Plus, following social norms often requires pretending, another reason to eradicate such useless efforts when possible, which is almost always.

[…]

  • Your view of the world is immediately narrowed when you open a feed. Whatever shows up will convince you that is what matters, that is what you should pay attention to. If the top 10 posts are about one thing, you will overestimate its true relevance and forget a million things happen every single day. This is the more subtle and arguably more dangerous form of social pressure/manipulation. Don’t pay attention to the noise. Don’t start your day with feeds.

  • Anything that looks easy is hard. The effort is hidden from you. Anything that seems hard is easier than it’s made out to be. The appearance is to deter you. The only way to know the truth is to do it and find out for yourself.

[…]

  • I know that judgements and defensiveness come from insecurity. Try the simple exercise of noticing the qualities you judge in others and the qualities you get defensive over. They often align.

[…]

  • We all need the reminder from time to time that the world is so, so, so big. Whatever we are dealing with crumbles at the cosmic scale.

[…]

  • You do not have to stay in touch with people whose company you don’t enjoy.

Source: ibehnam

Image: Bhargav Panchal

Being able to intensely live this experience for a day makes you want to revolutionize the world

Auto-generated description: Three brain scan images display highlighted areas with contrasting activity levels, indicated by color gradients, across two different conditions labeled Aphantasics > Controls.

Well, this is absolutely fascinating. Aphantasia is an inability to generate mental images. It has only really been “discovered” in the last few years that some people can’t do this.

This article discusses recent research showing that psychedelic drugs can reverse this in some people some of the time. I know that some people microdose on these kinds of things, so as we learn more about the brain, being able to alter our brain chemistry is going to feel akin to gaining superpowers.

One especially interesting case study describes a woman with severe aphantasia who reported that after taking psilocybin mushrooms, for the first time in her life, she was able to form mental images. She even dreamed in pictures – something she had never experienced before. Although the effect faded over time, her description of the experience is remarkable:

“I found it incredible because it was the first time I had images in my mind, and I realized that you can play with images, zoom in, zoom out, break down colors. The possibilities with mental images are endless… it’s an experience of pure mind. It opened up incredible possibilities for me… Being able to intensely live this experience for a day makes you want to revolutionize the world.”

A similar case was reported in a man with severe aphantasia who took ayahuasca, which is a brew containing the potent psychedelic DMT. Following the experience, he noted:

“I can now bring forth faint pictures in my mind. They fade quickly but they are there. When dreaming I now see faint, quickly fading images. It feels like this experience with ayahuasca has slightly opened up my mind’s eye and allowed me to experience internal images like I have never had before.”

These accounts highlight just how dramatically psychedelics can shift perception. Psychedelics also promote neuroplasticity and synaptic growth, which could further explain why some users experience changes in imagination and perception.

Source: psychedelerium

Image: eLife

Really real time?

This video was posted to r/ChatGPT before being deleted, then reappearing on r/artificial. Either way, Reddit users are pointing out that this is not “real time”, as it takes 1-3 hours to process a 20-second clip like this on a local machine. However, the chances are you could do this in real time if you threw enough cloud computing power at it.

Source: Reddit

To be honest it sounds like NFTs all over again

Auto-generated description: Text art displays the words NET Dollar on a black background with an elliptical shape behind the text.

The idea of micropayments for internet content is almost as old as the internet itself. Just because Cloudflare have used the word “agentic” next to the acronym “AI” doesn’t mean this is a different idea. The only differences with NET Dollar seem to be that it’s automated and based on a stablecoin (i.e. a blockchain). To be honest it sounds like NFTs all over again.

I may be wrong, and have been wrong many times before, but I believe that we can safely predict this is going precisely nowhere. Something like what Cloudflare propose might well exist in future, but this particular instantiation is likely to fail — even if only because the incumbent default web payment gateways might have something to say…

NET Dollar will help modernize the payment ecosystem for the future of the agentic web by:

  • Making payments easy anywhere in the world: Agents will need systems to enable payments that are not only fast and secure, but also trusted, recorded transparently, and executed reliably at a global scale – across currencies, geographies, and time zones.
  • Enabling instant, automated transactions: Personal agents will be able to take instant, programmatic actions like paying for the cheapest flight, or ordering an item the moment it goes on sale. Business agents could be instructed to pay suppliers when a delivery is confirmed.
  • Unlocking a new business model for the Internet: NET Dollar will enable creators to be rewarded for unique and original content, developers to easily monetize APIs and applications, and AI companies to contribute back to the ecosystem that fuels them by compensating content sources fairly.

Cloudflare is also contributing to open standards such as the Agent Payments Protocol and x402, to simplify the process of sending and receiving payments on the Internet.

Source: Cloudflare Newsroom

Many other countries also use digital ID of one kind or another

Auto-generated description: A stylized red fingerprint pattern is set against a dark blue background.

I’m quoting this article about the planned introduction of the UK digital identity system because, without going into technical details, it’s objective and covers all of the bases. In an ideal world, I wouldn’t want this kind of system. But we don’t live in an ideal world, and this approach seems reasonable given the current state of things.

Why do I say ‘reasonable’? It’s focused on the right to work (therefore students and pensioners aren’t required to have one), it combats existing fraud (the ‘sharing’ of National Insurance numbers is rife), and it simplifies access to government services. Also, knowing a bit about the technical standard it’s built on, people’s personal details remain on-device, with only decentralised identifiers shared with a national registry.
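For readers unfamiliar with how “details stay on-device” can work, here is a deliberately simplified sketch of the idea behind Verifiable Credentials and decentralised identifiers. This is emphatically not the real GOV.UK design (which uses cryptographic signatures, not bare hashes); every name and structure below is invented for illustration.

```python
# Toy sketch of the Verifiable Credentials idea: the full credential stays in
# the holder's wallet; a verifier sees only a pseudonymous identifier plus the
# single claim it asked for. NOT the actual GOV.UK implementation.
import hashlib
import json

# Full credential lives only on the holder's device.
credential = {
    "name": "Jo Bloggs",
    "date_of_birth": "1990-01-01",
    "right_to_work": True,
}

def decentralised_identifier(cred: dict) -> str:
    """A stable pseudonymous ID derived from the credential, shareable with a
    registry without revealing the underlying personal data."""
    digest = hashlib.sha256(json.dumps(cred, sort_keys=True).encode()).hexdigest()
    return f"did:example:{digest[:16]}"

def present(cred: dict, claim: str) -> dict:
    """Selective disclosure: reveal only the one claim a verifier requests."""
    return {"id": decentralised_identifier(cred), claim: cred[claim]}

# An employer's right-to-work check sees a DID and one boolean - nothing else.
print(present(credential, "right_to_work"))
```

The design point is that the verifier never receives the name or date of birth at all, rather than receiving them and promising not to look.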

I have no doubt, however, that it will be shot down in flames — even though such systems work well in other countries. It’s interesting that there are people writing in such diverse outlets as The Guardian and The Telegraph in favour. So maybe once everyone’s calmed down there might be some rational debate.

The timing of this announcement actually makes my job a lot more ‘interesting’ next week as I’m running a workshop for various public bodies in Scotland. I’m helping one of them propose a national digital badging system based on Verifiable Credentials, which is also what underpins the proposed UK digital identity system.

Governments tend to be terrible at infrastructure projects and these days are not well trusted by the population. At the time of writing, the petition to stop them is at 1.85m signatures, so I’m assuming that, far from heading off Reform UK, it will actually mean more people oppose the government and have reason to vote for a different party next time.

The government has announced plans to introduce a digital ID system across the UK, with Prime Minister Sir Keir Starmer saying it will ensure the country’s “borders are more secure”.

The IDs will not have to be carried day-to-day, but they will be compulsory for anyone wanting to work.

The government says the scheme will be rolled out “by the end of the Parliament” - meaning before the next general election, which by law must be held no later than August 2029.

[…]

The digital IDs will be used to prove a person’s right to live and work in the UK.

They will take the form of an app-based system, stored on smartphones in a similar way to the NHS App or digital bank cards.

Information on the holder’s residency status, name, date of birth, nationality and a photo will be included.

Announcing the scheme, Sir Keir said: “You will not be able to work in the United Kingdom if you do not have digital ID. It’s as simple as that.”

The government says the scheme is designed to curb illegal immigration by making it harder for people without status to find jobs. Ministers argue this is one of the key pull factors for migrants entering the UK illegally.

Employers will no longer be able to rely on a National Insurance number - which is currently used as part of proof of right to work - or paper-based checks.

At the moment, it is quite easy to borrow, steal or use someone else’s National Insurance number and that is part of the problem in the shadow economy - people sharing National Insurance numbers for example. The idea is that having a picture attached would make it - in theory - harder to abuse that system.

[…]

Digital ID will be available to all UK citizens and legal residents, and mandatory in order to work.

However, for students, pensioners or others not seeking work, having a digital ID will be optional.

Officials also stress it will not function like a traditional identity card: people will not be required to carry it in public.

Ministers have ruled out requiring the ID for access to healthcare or welfare payments.

However, the system is being designed to integrate with some government services, to make applications simpler and reduce fraud.

The government said that, in time, digital IDs would make it easier to apply for services such as driving licences, childcare and welfare. It said it would also simplify access to tax records.

[…]

The government has promised the system will be “inclusive” and work for those without smartphones, passports or reliable internet access.

A public consultation expected to be launched later this year will include looking at alternatives - potentially including physical documents or face-to-face support - for groups such as older people or the homeless.

[…]

The UK government has said it will “take the best aspects” of digital ID systems used elsewhere around the world, including Estonia, Australia, Denmark and India.

[…]

Many other countries also use digital ID of one kind or another, including Singapore, Greece, France, Bosnia and Herzegovina, the United Arab Emirates, China, Costa Rica, South Korea and Afghanistan.

Source: BBC News

Image: Arthur Mazi

The words we use define boundaries for things, but those boundaries are not universal

Auto-generated description: A graffiti-covered wall features a speech bubble sticker saying BLAW BLAW BLAW and other colorful tags.

I’m immensely grateful to Laura Hilliger for sharing this post. While it covers familiar ground for me (Wittgenstein on games! People that categorise colours differently! The interconnectedness of everything!) it’s a good reminder for me to get back to writing about ambiguity.

It’s 14 years since I wrote an article with my thesis supervisor about ambiguity, and I’ve been fascinated by the topic ever since. I reckon one of the best things you can do to open your mind to all of this is read books like Alice’s Adventures in Wonderland (which I quote in that article) or Flatland. Of course, the danger with doing too much philosophical thinking is that you tear a hole in reality, poke your head through to the other side, and things are never the same again…

First, I’ll explain how language is a very flimsy and arbitrary tool in itself. It’s deceptively simple—even children can use it—yet it’s built on a mountain of assumptions and contingencies that could really be chosen any other way. Second, I’ll try to make the point that, regardless of how precise or imprecise our language is, our habit of distinguishing things from one another doesn’t seem to be justified by how reality is built.

[…]

To begin with, human language gives you the impression of being able to categorize things with names. Our words feel so clear, so unambiguous in our daily lives, that any ambiguity or fuzziness becomes instinctively repulsive to us. We consult dictionaries, we ask for clarifications, we argue and get upset over the “real” meaning of “free will”, “justice”, “consciousness”, and “I’m fine”. Of course we know that words can’t be all that precise, that their meaning depends on context, and so on, but that’s still underestimating just how unreliable they are.

[…]

My point is: the words we use define boundaries for things, giving us handy ways to tell things apart, but those boundaries are not universal. They’re not “in the world”, they’re practical shortcuts that exist only in human heads. If you look really closely, or if you look at the science, there is no strong reason to draw those lines one way or another. There is no defensible distinction between a mountain and a mountain range, or between a mountain range and the Earth’s crust, or between the Earth’s crust and the Earth. A mountain is the Earth, and calling it a “part” of it is a convention that suits us when we want to talk about going places, or studying cloud formations, and other very human goals. Some things seem to have more clear-cut separations between them, like the boundary between an egg’s shell and the air, but even then the demarcation is clear only under certain conditions (i.e. a certain range of temperatures and air pressures) and size scales (i.e. not at the atomic level or in terms of astronomical distances) and time scales (i.e. the egg stops existing as a distinguishable object after a while).

[…]

Now we’ve gone beyond the realm of language, and are talking about the very nature of reality. The universe is one seamless, uninterrupted network of rippling and overlapping differences, and words merely project fuzzy boundaries that need only work well enough for our temporary and circumstantial needs. Even in very limited contexts where the words are relatively precise, our choice of terms to describe anything is arbitrary. In truth, everything is interacting, directly or indirectly, with everything else, and there is no obligatory, objective way to cut that web into separate entities. Nature has no “boundary-formation law” nor requirements for things to clump together and stay clumped long enough that we can give them names. In fact, the laws we have are all about energy transformations and waves and forces pushing and pulling stuff around: none are about keeping things still.

And yet, things do clump together, and they do remain still or stable, and we do have enough time to make up labels for them. The stability we take for granted—from that of the solid objects we see and touch to that of long-lasting processes like photosynthesis and life itself—seems to be some kind of freak accident. Stability comes as a side effect of mutability.

Source: Plankton Valhalla

Image: Mika Baumeister

What we need to do is figure out how we can participate in reality

Auto-generated description: Graph showing Donald Trump's average job approval ratings on various issues like immigration, overall performance, the economy, trade, and inflation, all trending negatively.

Fascists don’t deal with reality. They say that immigrants are eating cats, dogs, or swans. They blame far-right violence on ‘antifa’. They claim that black is white, up is down, and inside is out.

This post by J.P. Hill uses the recent hoopla around the Rapture supposedly coming yesterday as a way in to discuss all of this. It’s a distraction, it’s entertainment without cost: not even bread and circuses but just AI slopaganda for an abhorrent worldview.

We each have a choice to make right now. On the one hand, the most powerful people on Earth want to lure us away from the truth. They want us to believe their lies, they want us to live in an artificial reality while they steal the land beneath our feet and take the water beneath the land. The ruling class is betting trillions on AI, they’re betting trillions on fascism, they’re doubling down on a system that requires infinite growth on our finite planet. Instead of dealing with reality they tell us we’re all going to Mars one day. Instead of meeting our needs they’re telling us to blame the most marginalized people in society. Instead of offering us truth they offer us a series of lies, a series of imaginary carrots dangled to take us further and further from reality while they pillage the real world all around us.

It’s difficult to imagine changing this paradigm. It can be hard to imagine confronting the brutal nature of our reality and building something better in its place. As Mark Fisher said, “It’s easier to imagine the end of the world than the end of capitalism.” For countless people it seems easier to imagine the apocalypse or the rapture than it is to imagine a better world. And, in fairness, imagining heaven is simple. Imagining the restructuring of society, the construction of egalitarian systems, the implementation of real justice is complicated.

But we don’t need to figure out a perfect world right now. What we need to do is figure out how we can participate in reality. We need to stop seeking escape and instead plug in, play a part, take some action out in this world that so desperately needs us. It’s time to accept reality, accept that it’s ugly out there, and accept that we’re the only ones who can change this world. The forces of fascism rely on you tuning out, running from reality, indulging in their fantasies. We have to reject their lies, reject the carrots they dangle, and instead run toward reality and toward active participation in this fucked up world.

Source and image: New Means

Microcast #107 — Apocalyptic events

Auto-generated description: Dark, ominous clouds glow with a fiery red-orange hue, creating a dramatic sky scene.

I found the best Wikipedia page, which reminded me of an awesome episode of the ‘Hardcore History’ podcast.

Show notes

(Note: Dan Carlin sells older podcast episodes on his blog. You can also access the episode for free here)

Microcast #106 — Conversational configuration

Auto-generated description: A complex, abstract geometric structure composed of interconnected cubes and star-like shapes is set against a yellow background.

Thinking about ways in which users can interact with systems in conversational ways which allow apps, platforms, and services to configure themselves to meet user needs.

Show notes

Transcript

Microcast #105 — Being defeated is optional

Microphone

Resurrecting microcasts after 18 months (no intro/outro music yet!) with musings on quotations from Roger Crawford and Epictetus:

Show notes

Transcript

The project of building alternatives to Big Tech is colliding with American authoritarianism

A hand holding a smartphone. The screen shows a folder of Fediverse apps.

Laurens Hof writes a newsletter called Connected Places which really is a must for anyone interested in federated social networks and decentralisation in general. As someone who has led a Fediverse project I retain a professional interest, and of course I have a personal interest as someone with active Mastodon and Bluesky accounts (as well as less active Pixelfed and Bonfire accounts).

This particular post about ‘Blueskyism’ is a difficult one to summarise quickly, as it’s nuanced, but essentially Hof explains the situation that Bluesky (the company) found itself in last week after the assassination of Charlie Kirk. Because Bluesky (the network) isn’t very decentralised, the moderation practices of Bluesky (the company) affect almost everyone on Bluesky (the network).

Although Hof doesn’t mention it specifically, there are some calling the assassination a ‘Reichstag fire’ moment for the US — i.e. a pretext to crack down on the political left. Bluesky (the network) is being painted by those on X (including its owner Elon Musk) as a ‘leftist space’ which needs to be ‘dealt with’ in some way. As it is not very decentralised, someone could buy Bluesky (the company) and effectively shut it down. Easier still, when you have someone like Trump in power, an executive order, threats of tariffs, or some other abuse of power can effectively silence free speech, as we’ve seen with Jimmy Kimmel.

The state of open social networks has rapidly changed. Building social networks that can overtake big tech platforms was always an inherently political project, but recent developments in America have added a new dimension of urgency. Centrist pundits have made an effort to paint Bluesky as a leftist space. Outrage merchants on X share and amplify fabricated narratives about Bluesky users celebrating Kirk’s death, while fascist voices grow louder in their calls to shut down and prosecute all democratic and leftist spaces, which now includes Bluesky. Now, with US congressional demands for censorship and calls to remove Bluesky from app stores, the project of building alternatives to Big Tech is colliding with American authoritarianism.

[…]

Building resilient networks in 2025 means not just architecting against enshittification, but against authoritarianism. The infrastructure for ‘credible exit’ that Bluesky promotes may soon need to encompass not just leaving one ATProto platform for another, but also factor in what happens to the entire open social ecosystem when app stores and governments align against it. When authoritarian governments and tech oligarchs coordinate to eliminate spaces for political opposition, the shape of the solutions, both technological and social, need to account for this new threat. The challenge now is to imagine and build infrastructure that can survive not just bad business decisions, but coordinated political suppression. Building resilient social networks now means preparing for a future where being labeled as a ‘left’ space can get your app removed from app stores, and where the act of maintaining an open protocol becomes an act of resistance.

Source: Connected Places

Image: Elena Rossini

A brick is always a brick, whatever the reasons of the clown chucking it

A pile of bricks

It’s hard to argue with Aditya Chakrabortty’s case in this article for The Guardian that Labour are now simply the warm-up act for Reform UK. Having seen a majority of my fellow countrymen and women vote for Brexit almost a decade ago, it wouldn’t surprise me if they decided to vote in its chief architect, Nigel Farage.

It’s almost like the definition of the sunk cost fallacy: lurching to the right and “taking back control” from the EU didn’t do enough, so let’s go even further and “kick out the immigrants,” eh? Utterly, utterly mad. Anything to stop us paying attention to insane wealth inequality and extraordinarily rich individuals acting like they represent the “will of the people.”

I can’t quickly re-find the source, but I remember someone pointing out that — due to a declining birth rate and ageing populations — it won’t be long before many nations will be competing for immigrants.

At the end of last month, Nigel Farage promised mass deportation of practically anyone seeking asylum in this country, even if it meant handing Afghan women over to the Taliban and sending Iranian dissidents to their deaths. To the press, No 10 didn’t so much as raise an eyebrow at the Reform UK leader referring to other humans as a “scourge” or an “invasion”. For the great unwashed, it posted the most extraordinary advert. “Whilst Nigel Farage moans from the sidelines, Labour is getting on with the job,” it read, showing an image of Starmer stamped with “removed over 35,000 people from the UK”. Why vote for the full-fat hatemongers when diet racists will do the job just fine?

Plenty of Labour people will say they aren’t racist at all, and I wouldn’t wish to argue. But one lesson about prejudice I learned fast growing up was to focus on impact rather than quibble about intention. A brick, in other words, is always a brick, whatever the reasons of the clown chucking it.

[…]

“The British people have a far more nuanced view of immigration than the media and political narrative would have us believe,” observes Nick Lowles of anti-fascist organisation Hope Not Hate in his new book, How to Defeat the Far Right. Only one out of 10 Britons is outright opposed to immigration, while many who identify, say, asylum seekers as a huge issue have never met one. Of the top 50 areas in the UK most vehemently opposed to Muslims, Lowles finds that 27 are in the district of Tendring, in Farage’s constituency of Clacton. Yet how much of Tendring’s population is Muslim? Fewer than one out of 200: 0.4%.

Armed with such findings and a historic majority, Labour could easily counter some of the wild extremism. Ministers might point out that “English patriot” Robinson is an Irish passport-holder (up until last summer, anyway) who hunkers down in Spain and has a list of criminal convictions long enough for a tattoo sleeve. Starmer might observe how much of the UK would simply fall apart without migrants and their children – from your local hospital to the school to the care home. How universities are facing collapse without foreign students and their bumper fees. He might even point out – imagine! – that migrants are human too, with their own lives and dreams for themselves and their families. We could get on to the legacy of empire, and about how the climate crisis and poverty force other populations to move.

[…]

History has a habit of giving little men big tasks. Joe Biden had one job: to stop Donald Trump returning to power. His failure will have consequences for the world. Starmer’s one historic role is to stave off the hard right. He is not only failing, he is paving the way for Farage and his crew. The supposed “centrists” are ushering fringe politics into the mainstream and normalising the abhorrent.

But just listen to the speeches and chants made by the extremists. Robinson no longer talks about small boats; he wants his country back. After years of resisting mass deportations as “impossible”, Farage now touts them as the solution. The Overton window is shifting further and further to the right. The ultimate price for that will not be paid by a politician, but by people far from power: an Ethiopian boy, perhaps, with no family, or an Asian kid looking out the window one evening.

Source: The Guardian

Image: Hal Gatewood

Now is the time to be even more aggressive, not to cower in the face of pressure and criticism

The word 'Chaos' made out of black netting

Paris Marx, who is back on Ghost after a brief flirtation with Substack, takes a look at social media regulation in a recent post. Comparing the regulatory landscapes in the US, UK, and Australia, he argues that “the perfect is the enemy of the good” when it comes to what he calls the “social media harm cycle.”

As he points out, although the regulations are targeting children under the age of 16, the issues around AI and algorithms on social networks affect everyone. We’ve tried our best to keep our two teenagers off algorithmic social networks until they turned 16, but it’s difficult. And it’s not like there’s a magic “maturity and self-control” switch that is turned on when you reach specific ages.

Instead, and this isn’t something I would have advocated for a decade ago, regulation is required to break the loop of algorithmic addiction. Back in 2012 when my son and daughter were five years old and one year old, respectively, I argued that “The best filter resides in the head, not in a router or office of an Internet Service Provider (ISP).” I’m still anti-censorship, but we’ve managed to allow Big Tech to have far too much control over our everyday lives.

These algorithms are perhaps the most powerful shapers of society at the moment — which is why it’s kicking off everywhere. They’re rage machines.

In the past, I might have been more hesitant about these efforts to ramp up the enforcement on social media platforms and even to put age gates on the content people can access online. But seeing how tech companies have seemingly thrown off any concern for the consequences of their businesses to cash in on generative AI and appease the Trump administration, and seeing how chatbots are speedrunning the social media harm cycle, many of my reservations have evaporated. Action must be taken, and in a situation like this, the perfect is the enemy of the good.

I don’t support the US measures that are effectively the imposition of social conservative norms veiled in the language of protecting kids online. But I am much more open to what is happening in other parts of the world where those motivations are not driving the policy. Personally, I think the Australians are more aligned with an approach I’d support.

They’re specifically targeting social media platforms, rather than the wider web as is occurring in the UK, and the mechanism of their enforcement surrounds creating accounts. So, for instance, now that YouTube will be included in the scheme, that means users under 16 years of age cannot create accounts on the platform — that would then enable collecting data on them and targeting them with algorithmic recommendations — but they can still watch without an account. There are still concerns around the use of things like face scanning to determine age, but in my view, it’s time to experiment and adjust as we go along.

Even with that said, if I was crafting the policy, I would take a very different approach. It’s not just minors who are harmed by the way social media platforms are designed today — virtually everybody is, to one degree or another. While I support experimenting with age gates, my preferred approach would focus less on age and more on design; specifically, severe restrictions on algorithmic targeting and amplification, limiting data collection and making it easier for users to prohibit it altogether, and developing strict rules on the design of the platforms themselves — as we know they use techniques inspired by gambling to keep people engaged.

To be clear, the Australians and the Brits are looking into those measures too — if not already rolling out some measures along those lines. These are actions we need to take regardless of the politics behind the platforms, but given how Donald Trump and many of these executives are explicitly trying to use their power to stop regulation and taxation of US tech companies, now is the time to be even more aggressive, not to cower in the face of pressure and criticism.

Source: Disconnect

Image: Declan Sun

People living today are almost never the descendants of the people in the same place thousands of years before

Grafitti saying 'no one is illegal'.

At a time when nationalists, white supremacists, and fascists would have you believe otherwise, it’s worth reminding ourselves that we are essentially a migratory species.

[I]ncreasingly sophisticated analysis of genetic material made possible by technological advances shows that virtually everyone came from somewhere else, and everyone’s genetic background shows a mix from different waves of migration that washed over the globe.

“Ancient DNA is able to peer into the past and to understand how people are related to each other and to people living today,” [Harvard geneticist David] Reich said during a talk at the Radcliffe Institute for Advanced Study. “And what it shows is worlds we hadn’t imagined before. It’s very surprising.”

Human populations have been in flux for tens of thousands of years since our emergence from Africa. The details of the still-developing picture are complex, but the overall theme is one of increasing homogenization since human diversity fell from the time when modern humans lived next door to Neanderthals, two strains of Denisovans, and the diminutive Homo floresiensis of Indonesia.

[…]

“The big perspective change from ancient DNA study is that people living today are almost never the descendants of the people in the same place thousands of years before,” Reich said. “Human movements have occurred at multiple timescales, often disruptive to the populations that experience them, and these patterns were not possible to predict and anticipate without direct data.”

Source: The Harvard Gazette

Image: Miko Guziuk

You must not talk about the future. The future is a con.

A vintage photograph of two donkeys hitched to a wooden cart feeding in a brick street with a collage of technology as their load.

Emily Segal’s talk at FWB FEST last month was entitled The End of Trends, and this post both embeds the talk and provides an edited transcript (which I appreciate: more people should do this!)

My understanding of the productively ambiguous post is that trying to predict the future based on “trends” is probably best left to AI, which can sift through much more information than humans can. Instead, we should be focusing on the more “grounding” perspective of trying to enrich the present moment.

I like this approach. Goodness knows it’s anxiety-provoking for people like me to extrapolate existing behaviours into the future. While some might say this nullifies effective action, I’d actually argue the opposite: it prevents paralysis and provides a bias towards action in the here-and-now.

In Canto 20 of Dante’s Inferno, Dante and Virgil visit the circle of hell reserved for astrologers and soothsayers. Their punishment for trying to see too far ahead in life is that their heads are twisted backward. Their hair runs down their fronts. They stumble forward while always looking behind them.

Dante breaks the fourth wall here: he says it’s wrong to pity anyone in hell, but he can’t help pitying the soothsayers. I think many of us are in a similar predicament now – trying to move forward while constantly looking backward.

[…]

I often think of Ursula K. Le Guin’s The Left Hand of Darkness, in which a group of “Foretellers” pool consciousness to answer questions they could not access individually. The weaver in the group maintains tension in the pattern until it breaks, revealing the answer.

And I think of Alejandro Jodorowsky’s caution:

“You must not talk about the future. The future is a con. The tarot is a language that talks about the present. If you use it to see the future, you become a charlatan. … Everything is linked, but nothing is a matter of probability.”

In an age surrounded by probability machines, this perspective is grounding. So I leave you with this: Will you become a novum – a machine-like thinker, a better-than-chance oracle in a human body? Or will you focus on enriching the present, making it better in the moment?

Source: Nemesis Memo

Image: Suraj Raj & Digit

Asking, Doing, or Expressing?

Auto-generated description: A chart displays various granular conversation topic shares represented by colored horizontal bars, each with labels indicating different topics and their percentage contributions to the total population.

A few months ago, when I shared the work of Marc Zao-Sanders about how people use generative AI, I noted that his “rigorous, expert-driven curation of public discourse, sourced primarily from Reddit forums” didn’t actually include a methodology.

In the last few days, OpenAI has released a paper in collaboration with scholars at Duke University and Harvard University which would suggest that people’s actual use of ChatGPT is… quite different from the picture that Zao-Sanders gave. To me, that’s unsurprising given that he was sourcing his insights from Reddit, which skews young and male.

Auto-generated description: A bar graph illustrates the difference in the share of topic prevalence in messages between users with typically masculine and feminine names across various categories, with feminine names favoring topics like social interaction and self-expression, while masculine names lean towards technical writing and persuading others.

Interestingly, the demographics of ChatGPT users have changed markedly in the years since it was released. Apparently, in the first few months after it was made available, four-fifths of active users had “typically masculine first names”, with that number dropping to less than half as of June 2025. Now, active users are slightly more likely to have typically feminine first names. It seems like different genders use generative AI differently, too (Fig. 19).

Finally, it’s telling that, as the use of ChatGPT grew five-fold from June 2024 to June 2025, the share of “non-work” uses rose from 53% to 73%. It’s definitely interesting times. Don’t mind me, I’m off to re-watch Her (2013).

First, we show evidence that the gender gap in ChatGPT usage has likely narrowed considerably over time, and may have closed completely. In the few months after ChatGPT was released about 80% of active users had typically masculine first names. However, that number declined to 48% as of June 2025, with active users slightly more likely to have typically feminine first names. Second, we find that nearly half of all messages sent by adults were sent by users under the age of 26, although age gaps have narrowed somewhat in recent months. Third, we find that ChatGPT usage has grown relatively faster in low- and middle-income countries over the last year. Fourth, we find that educated users and users in highly-paid professional occupations are substantially more likely to use ChatGPT for work.

We introduce a new taxonomy to classify messages according to the kind of output the user is seeking, using a simple rubric that we call Asking, Doing, or Expressing. Asking is when the user is seeking information or clarification to inform a decision, corresponding to problem-solving models of knowledge work… Doing is when the user wants to produce some output or perform a particular task, corresponding to classic task-based models of work… Expressing is when the user is expressing views or feelings but not seeking any information or action. We estimate that about 49% of messages are Asking, 40% are Doing, and 11% are Expressing. However, as of July 2025 about 56% of work-related messages are classified as Doing (e.g., performing job tasks), and nearly three-quarters of those are Writing tasks. The relative frequency of writing-related conversations is notable for two reasons. First, writing is a task that is common to nearly all white-collar jobs, and good written communication skills are among the top “soft” skills demanded by employers (National Association of Colleges and Employers, 2024). Second, one distinctive feature of generative AI, relative to other information technologies, is its ability to produce long-form outputs such as writing and software code.

We also map message content to work activities using the Occupational Information Network (O*NET), a survey of job characteristics supported by the U.S. Department of Labor. We find that about 58% of work-related messages are associated with two broad work activities: 1) obtaining, documenting, and interpreting information; and 2) making decisions, giving advice, solving problems, and thinking creatively. Additionally, we find that the work activities associated with ChatGPT usage are highly similar across very different kinds of occupations. For example, the work activities Getting Information and Making Decisions and Solving Problems are in the top five of message frequency in nearly all occupations, ranging from management and business to STEM to administrative and sales occupations.

Overall, we find that information-seeking and decision support are the most common ChatGPT use cases in most jobs. This is consistent with the fact that almost half of all ChatGPT usage is either Practical Guidance or Seeking Information. We also show that Asking is growing faster than Doing, and that Asking messages are consistently rated as having higher quality both by a classifier that measures user satisfaction and from direct user feedback.

…We argue that ChatGPT likely improves worker output by providing decision support, which is especially important in knowledge-intensive jobs where better decision-making increases productivity (Deming, 2021; Caplin et al., 2023). This explains why Asking is relatively more common for educated users who are employed in highly-paid, professional occupations. Our findings are most consistent with Ide and Talamas (2025), who develop a model where AI agents can serve either as co-workers that produce output or as co-pilots that give advice and improve the productivity of human problem-solving.

Source & image: OpenAI

We tell ourselves the story of human uniqueness like a bedtime prayer

Neon sign of praying hands seen through a fibreglass window

While I think this post overestimates the ‘rupture’ of AI, it is very well-written and certainly makes the reader think about the “clean arguments and messy implications” of “computation without consciousness” in a market which is a “sorting engine for outcomes.”

For me, though, the thing which is missing from this otherwise well-written piece is that human exceptionalism applies to everything we do. We are full of paradoxes: we keep some kinds of animals as pets and eat other ones. We talk about the wonder of Nature while destroying it. We see ourselves as separate to the natural order of things rather than part of it.

I find it interesting to see which people get worked up about AI. Writers and artists, for sure, as their livelihoods are on the line. But also, I would suggest (and separately) people who see humanity as somehow special and unique—without, necessarily, being able to describe what that uniqueness is.

The podcast episode I was listening to this morning on the philosophy of self-destruction is illustrative here. Georges Bataille, a philosopher I’d never before heard of, argued that the thing that makes us unique is our tendency towards self-destruction and sacrifice. What that means in relation to “the boundary between human value and human utility” I’ll leave for you to decide.

We tell ourselves the story of human uniqueness like a bedtime prayer. We are the animal that understands. We are the creature that feels. We are the author of meaning. Then the machines arrive with more memory than our institutions, more patience than our professions, and an ability to synthesise that makes much of our work look like the slow rearrangement of furniture. We retreat to consciousness and call it the final moat. Perhaps it is. The trouble is that markets do not pay for qualia. They pay for results. A system that can pass for understanding in most practical situations is enough to reprice our worth, even if it experiences nothing while doing so.

[…]

What makes the present rupture stranger [than previous ones] is not that tasks are being automated. That is an old trick. It is that the rungs we used to climb are melting away beneath our feet. The legal apprentice once learned by reading mountains of documents. The junior developer learned by slogging through bugs. The analyst learned by cleaning data until patterns flashed in the mind like weather. Now the entry work evaporates. A machine does it in minutes and does not complain. We congratulate ourselves on efficiency and then discover we have created an experience cliff. We are asking people to supervise work they were never allowed to do. Even the best intentioned upskilling will falter if the pipeline that produces intuition has been hollowed out. This cliff also suggests a remedy in simulated apprenticeship, a deliberate redesign of early careers where newcomers learn by validating and correcting machine output rather than by doing the drudgework the machine has removed. It is a shrewd answer, and it may be the only bridge we can build at speed.

[…]

Philosophy arrives, as it tends to, with clean arguments and messy implications. You can hold on to the view that computation without consciousness is never true understanding and still lose the economic game. You can be right about the inside of experience and wrong about the price of it. To be consoled by the thought that machines do not feel while they outpace us in most of the places society rewards is to win a metaphysical medal and find no one pays for medals. The market is not a seminar. It is a sorting engine for outcomes.

[…]

The line we have been defending is the boundary between human value and human utility, and we have treated them as if they were the same. We have been racing to remain useful because our institutions can only recognise worth through productivity and pay. A civilisation that automates most of its work must decide whether it will abandon people or invent a new grammar for dignity. We can reform education until the syllabi shine and still fail if graduation delivers people into a labour market that no longer needs them. We can preach lifelong learning as a secular catechism and still feel the hollowness if learning has nowhere meaningful to land.

Source: Hybrid Horizons

Image: Drew Beamer

Most people could read extra lines on eye test charts after using the drops

Auto-generated description: A person is sitting with legs crossed, reading a book in warm, dappled sunlight.

Part of growing older is realising that things that happened to ‘old’ people are now happening to you. For example, the classic middle-age (pre-reading-glasses) technique of moving a mobile phone or book further away so that you can focus on it properly.

I’ve noticed my wife, who is the same age as me, doing this—and recently, I’ve noticed it can take me a second to focus, too. Unlike her, I wear contact lenses, so requiring reading glasses will mean some kind of varifocal solution. While multifocal contact lenses exist, I can imagine this approach would be much better.

Hundreds of millions of people worldwide have presbyopia, which is when the eyes find it difficult to focus on objects and text up close. Glasses or surgery can usually resolve the problem but many find wearing spectacles inconvenient and having an operation is not an option for everyone.

Now experts say the solution could be as simple as using eye drops twice a day.

A study presented on Sunday at the European Society of Cataract and Refractive Surgeons (ESCRS) in Copenhagen showed that most people could read extra lines on eye test charts after using the drops. The improvement was sustained for two years.

[…]

The drops contain pilocarpine, a drug that constricts the pupils and contracts the muscle that controls the shape of the eye’s lens to enable focus on objects at different distances, and diclofenac, a non-steroidal anti-inflammatory drug (NSAID) that reduces inflammation.

“Impressively, 99% of 148 patients in the 1% pilocarpine group reached optimal near vision and were able to read two or more extra lines.”

Source: The Guardian

Image: Blaz Photo

These images are made from open access sources, and they are themselves open access

Two caricatures of top-hatted millionaires whose bodies are bulging money-sacks. Their heads have been replaced with potatoes. The potatoes' eyes have been replaced with the hostile red eye of HAL 9000 from Kubrick's '2001: A Space Odyssey.' They stand in a potato field filled with stoop laborers. The sky is a 'code waterfall' as seen in the credit sequences of the Wachowskis' 'Matrix' movies.

For the last few years, in the time between Christmas and New Year, I’ve created a collage using issues of The Guardian Weekly and any other magazines I’ve found (example). Cory Doctorow, who makes everyone feel like an underperformer, creates one every day.

Thankfully, he also believes in open working and sharing, meaning we all get to use them under a permissive license (in this case, CC BY-SA). He’s also collated his favourites into a limited-edition book. Because of course he has.

_Canny Valley_ collects 80 of the best collages I’ve made for my Pluralistic newsletter, where I publish 5-6 essays every week, usually headed by a strange, humorous and/or grotesque image made up of public domain sources and Creative Commons works.

These images are made from open access sources, and they are themselves open access, licensed Creative Commons Attribution Share-Alike, which means you can take them, remix them, even sell them, all without my permission.

I never thought I’d become a visual artist, but as I’ve grappled with the daily challenge of figuring out how to illustrate my furious editorials about contemporary techno-politics, especially “enshittification,” I’ve discovered a deep satisfaction from my deep dives into historical archives of illustration, and, of course, the remixing that comes afterward.

Source: Pluralistic

Image: CC BY-SA Cory Doctorow

A 10y old phone can barely load google, and this is about 100x slower

Pixellated image of the innards from a disposable vape

If you visit dougbelshaw.com you will notice that the site loads instantly, no matter the speed of your connection or which device you’re on. That’s because it’s a mere 7.7kB in size. I did have it under 1kB, but I added a JavaScript effect, and a favicon.

That means that I could host this website on pretty much anything I choose — including, it turns out, a vape. That’s right: the microcontroller inside a disposable vape runs at about the same clock speed as early 1990s personal computers. While those machines had more RAM and storage space, it’s pretty incredible (landfill considerations aside) that someone has managed to run a website from devices being used as fancy ‘pacifiers for adults’.

I’m not here to scold anyone, but we’re so used to autoplaying videos these days, even on news sites, that we don’t question their impact on the world’s energy consumption. Combine that with an increasing amount of dark data, and perhaps it’s time to consciously minimise our digital footprints. Also, it’s cool to be able to host your website yourself on something like a vape. Much cooler than using it for its intended purpose!
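To see how little “hosting a website” on such a chip actually requires, here’s a minimal, hypothetical sketch in Python (the real vape server is C firmware on the PY32; the names and page content below are my own illustration, not BogdanTheGeek’s code):

```python
# A toy version of what a tiny static web server does: wrap one page
# in an HTTP/1.0 response. The protocol overhead is a few dozen bytes,
# which is why a microcontroller with 3 KiB of RAM can cope.

def build_response(body: bytes) -> bytes:
    """Assemble a complete HTTP/1.0 response for a single static page."""
    return (
        b"HTTP/1.0 200 OK\r\n"
        b"Content-Type: text/html\r\n"
        b"Content-Length: " + str(len(body)).encode() + b"\r\n"
        b"Connection: close\r\n"
        b"\r\n"
    ) + body

page = b"<!doctype html><title>tiny</title><p>Hello from a vape.</p>"
response = build_response(page)

print(response.decode().splitlines()[0])   # HTTP/1.0 200 OK
print(len(response) - len(page))           # header overhead, in bytes
```

On the actual microcontroller the equivalent response would live as a constant array in flash and be written to the socket in small chunks, so almost none of the 3 KiB of RAM is needed.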

For a couple of years now, I have been collecting disposable vapes from friends and family. Initially, I only salvaged the batteries for “future” projects (It’s not hoarding, I promise), but recently, disposable vapes have gotten more advanced. I wouldn’t want to be the lawyer who one day will have to argue how a device with USB C and a rechargeable battery can be classified as “disposable”. Thankfully, I don’t plan on pursuing law anytime soon.

Last year, I was tearing apart some of these fancier pacifiers for adults when I noticed something that caught my eye: instead of the expected black blob of goo hiding some ASIC (Application Specific Integrated Circuit), I see a little integrated circuit inscribed “PUYA”. I don’t blame you if this name doesn’t excite you as much as it does me; most people have never heard of them. They are most well known for their flash chips, but I first came across them after reading Jay Carlson’s blog post about the cheapest flash microcontroller you can buy. They are quite capable little ARM Cortex-M0+ micros.

Over the past year I have collected quite a few of these PY32 based vapes, all of them from different models of vape from the same manufacturer. It’s not my place to do free advertising for big tobacco, so I won’t mention the brand I got it from, but if anyone who worked on designing them reads this, thanks for labeling the debug pins!

[…]

So here are the specs of a microcontroller so bad, it’s basically disposable:

  • 24MHz Cortex-M0+
  • 24KiB of Flash Storage
  • 3KiB of Static RAM
  • a few peripherals, none of which we will use.

You may look at those specs and think that it’s not much to work with. I don’t blame you, a 10y old phone can barely load google, and this is about 100x slower. I on the other hand see a blazingly fast web server.

Source: BogdanTheGeek’s Blog

Image: modified from original included in the source blog post (using Dither It!)

Secure backups let you save an archive of your Signal conversations in a privacy-preserving form

Screenshots of new Signal secure backups feature

Recently, my Dad upgraded his iPhone and needed to move all of his apps from one phone to another. As anyone who has done this will know, messages from encrypted messaging services such as WhatsApp and Signal usually have to be backed up and then restored separately from the rest of the app transfer.

WhatsApp makes this easy, but much less secure, by allowing users to back up to Google Drive or iCloud. This is, by default, not encrypted, so it’s an easy vector for hackers and state-level actors to target. Signal, on the other hand, requires either device-to-device transfer of messages, or manual backup and restore.

Signal has just announced secure backups, which is a major step forward. After all, while you could regularly auto-backup Signal chats to local storage, if you lost or broke your phone, those messages were irretrievably lost.

After careful design and development, we are now starting to roll out secure backups, an opt-in feature. This first phase is available in the latest beta release for Android. This will let us further test this feature in a limited setting, before it rolls out to iOS and Desktop in the near future.

[…]

Secure backups let you save an archive of your Signal conversations in a privacy-preserving form, refreshed every day; giving you the ability to restore your chats even if you lose access to your phone. Signal’s secure backups are opt-in and, of course, end-to-end encrypted. So if you don’t want to create a secure backup archive of your Signal messages and media, you never have to use the feature.

[…]

This is the first time we’ve offered a paid feature. The reason we’re doing this is simple: media requires a lot of storage, and storing and transferring large amounts of data is expensive. As a nonprofit that refuses to collect or sell your data, Signal needs to cover those costs differently than other tech organizations that offer similar products but support themselves by selling ads and monetizing data.

[…]

Once you’ve enabled secure backups, your device will automatically create a fresh secure backup archive every day, replacing the previous day’s archive. Only you can decrypt your backup archive, which will allow you to restore your message database (excluding view-once messages and messages scheduled to disappear within the next 24 hours). Because your secure backup archive is refreshed daily, anything you deleted in the past 24 hours, or any messages set to disappear are removed from the latest daily secure backup archive, as you intended.

Source & image: Signal blog

The FBI announced the alleged shooter’s apprehension with a quote from Mad Max

Auto-generated description: A colorful, abstract 3D terrain with fluid-like textures is set against a starry black background.

I’m not going to comment on the death of Charlie Kirk, but I would like to point to Garbage Day, the newsletter by Ryan Broderick which I quote regularly here on Thought Shrapnel. For me, it’s an essential aid to understanding the world as it is today.

Another newsletter called Today in Tabs summarised the Garbage Day post I’m going to cite here the following way:

So in summary: it appears that this was a shooting where the victim, an influencer, was answering a question from another influencer when he was shot by a third influencer, after which a fourth influencer documented the ensuing chaos and a host of other influencers registered their takes, before the director of the FBI (an influencer) and the deputy director of the FBI (another influencer) announced the alleged shooter’s apprehension with a quote from Mad Max.

This is the way the world is today: confusing, and extremely online.

The Garbage Day post is therefore really useful and insightful. It explains terms such as “groyper” that I hadn’t come across before and, as the father of an 18-year-old boy (man!), these are things it’s important to know about and discuss/share with teenagers. Well worth a read.

It’s also possible [suspect Tyler] Robinson genuinely believes in antifascist principles. But his alleged use of random internet brainrot is notable. Many extremism researchers this morning are wondering if Robinson is a self-identified “groyper,” or follower of far-right streamer Nick Fuentes. As we wrote yesterday, Fuentes has spent years attacking Kirk online. Groypers believed that Kirk was a sellout and blocking a much more extreme version of Trumpism from taking root. For years, Groypers have been carrying out what they call “Groyper Wars,” attending Kirk’s events and trying to disrupt them. For what it’s worth, 4chan users think Robinson was a Groyper.

Source: Garbage Day

Image: Steve Johnson

There's nothing they can do with the information

Auto-generated description: Numerous surfers spread out across clear blue water, each with a surfboard, waiting to catch waves.

In general, there’s a difference between “being an informed citizen” and “being a news junkie.” Because most people now get their ‘news’ via social media, and social networks are mostly algorithmic, there is often an emotional and/or partisan filter through which people obtain information. This is not good for our individual or collective mental health.

As a result, record numbers of people—including me—are limiting the amount of news they consume. Or at least how they consume it. I’ve even mostly stopped listening to The Rest is Politics, formerly one of my favourite podcasts. As this article points out, you have to be able to do something with the information you receive.

Globally, news avoidance is at a record high, according to an annual survey by the Reuters Institute for the Study of Journalism published in June. This year, 40% of respondents, surveyed across nearly 50 countries, said they sometimes or often avoid the news, up from 29% in 2017 and the joint highest figure recorded.

The number was even higher in the US, at 42%, and in the UK, at 46%. Across markets, the top reason people gave for actively trying to avoid the news was that it negatively impacted their mood. Respondents also said they were worn out by the amount of news, that there is too much coverage of war and conflict, and that there’s nothing they can do with the information.

[…]

Studies suggest that increased exposure to news – particularly via television and social media, and especially coverage of tragic or distressing events – can take a toll on mental health.

[…]

A growing body of advice online promotes healthier ways to consume news. Much of it focuses on creating guardrails so people can be deliberate about finding information when they’re ready for it, instead of letting it reach them in a constant stream. This might include signing up for newsletters or summaries from trusted sources, turning off news alerts and limiting social media.

Source: The Guardian

Image: Buddy AN

99.9% of opinions on the internet don’t matter

Auto-generated description: Aerial view of a lone green patch surrounded by a vast golden field with tractor lines creating an outline around it.

Good stuff, as ever, from Jay Springett. He’s ostensibly talking about arguing on the internet, but this post is really about identity. Your identity might be reflected in the things you do or like, but this does not comprise the sum total of that identity.

Now, I get it, I totally do. I understand that when ones identity has been so completely ‘formatted’ by social platforms and consumer capitalism that an attack on a media property, tv show, album, podcast, game, book, football team or whatever, feels like an attack on your own identity as a person. One can’t help feel the need to go to war, to protect yourself. You aren’t the media you consume, and media properties aren’t your friends. Why argue or care about if genre fiction “is real literature” or not? I suspect its because people feel like they need validation for their choice of media diet? Validation for the amount of time and energy one has spent putting ones attention towards a certain interest. This need for validation results in people expressing their taste online, not by sharing what they love, but by fighting with someone who doesn’t.

[…]

There is a fundamental truth about the internet, and it also applies to building/having an audience: 99.9% of opinions on the internet don’t matter. You don’t know these people, and they don’t know you. Other peoples approval won’t keep you warm but the perceived lack of it will keep you awake at night. Their disapproval also shouldn’t stop you from loving the thing. You don’t need anyones approval to post on the internet, you can just do things, and like stuff.

The only people whose opinions really matter in this world are the ones expressed from across the table. From your family and friends over dinner. The people in your life who’ll ask your recommendations because they know that your taste is your own.

Source: thejaymo

Image: Kristaps Ungers

An open, decentralised protocol making clear to AI crawlers and agents the terms for licensing, usage, and compensation

This image illustrates digital transformation gone wrong, where AI becomes a tool for intensified extraction. Workers operate sewing machines endlessly producing streams of spreadsheets and reports. Instead of liberating labour, AI automation can lock workers into more exhausting cycles of output, without increasing agency or rewards.

Announced Wednesday morning, the “Really Simple Licensing” (RSL) standard evolves robots.txt instructions by adding an automated licensing layer that’s designed to block bots that don’t fairly compensate creators for content.

Free for any publisher to use starting today, the RSL standard is an open, decentralized protocol that makes clear to AI crawlers and agents the terms for licensing, usage, and compensation of any content used to train AI, a press release noted.

The standard was created by the RSL Collective, which was founded by Doug Leeds, former CEO of Ask.com, and Eckart Walther, a former Yahoo vice president of products and co-creator of the RSS standard, which made it easy to syndicate content across the web.

Based on the “Really Simple Syndication” (RSS) standard, RSL terms can be applied to protect any digital content, including webpages, books, videos, and datasets. The new standard supports “a range of licensing, usage, and royalty models, including free, attribution, subscription, pay-per-crawl (publishers get compensated every time an AI application crawls their content), and pay-per-inference (publishers get compensated every time an AI application uses their content to generate a response),” the press release said.

[…]

Eckart [Walther, co-creator of RSS] had watched the RSS standard quickly become adopted by millions of sites, and he realized that RSS had actually always been a licensing standard, Leeds said. Essentially, by adopting the RSS standard, publishers agreed to let search engines license a “bit” of their content in exchange for search traffic, and Eckart realized that it could be just as straightforward to add AI licensing terms in the same way. That way, publishers could strive to recapture lost search revenue by agreeing to license all or some part of their content to train AI in return for payment each time AI outputs link to their content.

Source: Ars Technica

Image: Leo Lau & Digit

Be intentional with how you spend your time, and realise you actually have a surprising amount of it

Auto-generated description: A chaotic tangle of thin white lines crisscrosses against a black background, resembling a web or abstract pattern.

As quoted in The Marginalian, writer Annie Dillard famously stated “How we spend our days is how we spend our lives.” The longer quotation, arguing in favour of adding structure and a schedule to your day, goes like this:

How we spend our days is, of course, how we spend our lives. What we do with this hour, and that one, is what we are doing. A schedule defends from chaos and whim. It is a net for catching days. It is a scaffolding on which a worker can stand and labor with both hands at sections of time. A schedule is a mock-up of reason and order—willed, faked, and so brought into being; it is a peace and a haven set into the wreck of time; it is a lifeboat on which you find yourself, decades later, still living. Each day is the same, so you remember the series afterward as a blurred and powerful pattern.

Although he cites Tim Urban as an influence rather than Annie Dillard, this post by Nathan Brown makes a similar point. We have a finite amount of time on this earth, and a finite number of hours available to us each week. A surprising number of these are discretionary—as in, whether it feels like it or not, we can choose how to use them.
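Brown’s figure of 52 discretionary hours a week is straightforward arithmetic. The sketch below reconstructs it with assumed numbers; the breakdown is mine for illustration, not his:

```python
# A rough weekly time budget. Every figure here is an assumption.
HOURS_PER_WEEK = 24 * 7      # 168

sleep      = 8 * 7           # 56 hours of sleep
work       = 40 + 5          # a full-time job plus commuting
essentials = 15              # eating, chores, hygiene, admin

discretionary = HOURS_PER_WEEK - sleep - work - essentials
print(discretionary)         # 52 hours/week left to allocate deliberately
```

Shift the assumptions and the total moves, but it rarely falls below a few dozen hours, which is the post’s point.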

I’m not a huge hustler. I’m not necessarily advocating that you spend your 52 hours/week building a startup or working an extra job. But I’m also an advocate for not being super lazy and sitting around and watching TikTok/YouTube all day.

I guess my point is to be intentional with how you spend your time, and to realize you actually have a surprising amount of it, once you account for all the essentials. What percentage of your discretionary time do you want to spend on…

  • hanging out with friends
  • bettering yourself
  • outdoor activities
  • volunteering
  • creative expression (art, writing, etc.)
  • entertainment

You choose—seriously. Not trying to guilt-trip you into anything. But I will be damned if I go through my life spending 10 of my discretionary hours/week watching Instagram Reels and then my gravestone says:

“Nathan was a kind, loving soul. His greatest achievement was watching 7,000,000 Instagram Reels.”

Source: Nathan Brown

Image: Resource Database

Grid-forming batteries will ultimately corner the stability market thanks to their inherent multifunctionality

The Blackhillock grid-scale battery, located between Inverness and Aberdeen in Scotland

As the UK moves steadily towards a fully renewable future, one of the challenges is stabilising the power grid when electricity supply suddenly drops or spikes. Wind and solar energy can, after all, be unpredictable. Traditionally, fossil fuel power stations have helped with this stabilisation, but these are being shut down to cut emissions and fight climate change.

New ‘grid-scale’ batteries are being built which act like giant backup reservoirs for electricity. They store extra power when there’s a surplus (e.g. sunny days, windy nights) and then quickly release it to the grid whenever there’s an unexpected drop. As these batteries don’t burn fuel or create pollution, they’re great for the environment, and the new technology is fast enough to fill the power gap almost instantly.

Zenobē’s global director of network infrastructure, Semih Oztreves, predicts that grid-forming batteries will ultimately corner the stability market thanks to their inherent multifunctionality. While synchronous condensers mostly sit idle, waiting for a rare grid fault, Zenobē’s advanced batteries earn daily revenue by doing what most other storage sites do. For example, they arbitrage energy, absorbing power when it’s cheap and selling when supplies get tight.

But the short-circuit chops of grid-forming batteries haven’t yet faced a real-life test. Until then, doubts linger about whether transmission relays will respond appropriately to the inverters’ digitally defined surge of current. In a report last year for Australian grid operator Transgrid, one expert advised against overreliance on grid-forming inverters for short-circuit current, saying that it would carry “high to very high risk.” The utility later announced 10 synchronous condensers and 5 grid-forming batteries to bolster its grid.

Source & image: IEEE Spectrum

Your actions follow your self-beliefs

Auto-generated description: A person is holding a small ornate mirror in front of their face, creating a surreal effect as they wear a white shirt outdoors.

Something I say semi-regularly to friends and family is “people can only treat you the way you let them.” Inspired by quotations from Seth Godin and James Clear, this post is the other side of the coin. It’s about the way you treat yourself.

This year, I’ve had tests for a whole range of things. One by one I’ve discovered there’s nothing wrong with the structure of my heart, with my lungs, thyroid, adrenal glands—and I’m not anaemic. As the year has gone on, I’ve gotten slightly better, and then a lot better.

I’m still not fully right, as I can’t run or do some of the more intense physical things I used to do. But I’ve got a new self-image, one that’s perhaps more appropriate to the age I am. Not that I should expect less of myself, but that I should expect different things.

Your actions follow your self-beliefs.

If you identify as a failure, incapable of achievement, unfit, unlovable, destined to play a bit-part role in your own story, then by heck no matter how much willpower you put in to push that boulder up the hill, it will return to its place.

But there’s a way through: every action you take is a vote for who you wish to become. Every day you wake up, look your old identity in the eye and say “thanks for your service, but you’re not needed around here anymore,” step forward and lean in, is a day your new identity is built.

It takes time. You have to actually want it. You have to choose to adopt a new mindset. Rome wasn’t built in a day. But it comes, a little like how Hazel Grace Lancaster describes falling in love in The Fault In Our Stars: “slowly, and then all at once.”

The path is there, should you choose.

Source: @fredrivett

Image: chloe combs

Each of us is part of an interpretive community that gives us a particular way of reading a text

Auto-generated description: A side mirror on a vehicle reflects a colorful sign saying ALWAYS BE YOURSELF along a street scene.

I’m not sure why I’m sharing this post other than that it reaffirms my commitment to stay off social networks such as X, Instagram, and TikTok. The suggestions towards the end (use RSS! embrace federated social networks! switch to Linux/GrapheneOS/Signal!) I’ve already embraced, but the thing is that unless a significant minority of people do this, it makes very little difference to the rest of the world, which is effectively powered by algorithms.

Literary theorist Stanley Fish argued that we as individuals interpret any given text (in this case, social media content) “because each of us is part of an interpretive community that gives us a particular way of reading a text.” That interpretive community usually isn’t there when we are fed what the algorithm thinks we’ll consume. We may share something thinking that people like us will see it and share their opinions. However, because of the way that algorithms work, engagement is the main driver of a post’s visibility; so here come 10 million people who have no clue of the context of that thought you shared about your niche interest.

Suddenly your post is full of over-the-top jokes and non-content-related quips from members of a completely random mix of audiences. As the algorithm prioritizes engagement, your post’s new mixing pot of clueless audiences outnumbers the genuinely-interested audience of your own niche corner of the internet (if that even exists anymore), and they comment about everything BUT the content.

Source: tékhnē.dev

Image: Kevin Grieve

I’m pretty confident you only need two things. Feedback and humility, and they work best together.

Auto-generated description: A hand is holding a perforated board with the word #feedback on it.

I was listening to a podcast recently about the concept of “limitarianism” entitled Imagine there’s no billionaires in which the political philosopher Ingrid Robeyns laid out the ways in which, truly, every billionaire is a policy failure. Nobody accumulates great wealth without some sort of dependency on society — whether that’s tax breaks, lack of worker regulation, or some other “pro business” (but anti-society) law.

The truth is that people don’t achieve success by themselves. Luck, nepotism, and cultural capital play a huge part in what most people would deem success. That’s not to say that it’s not possible to increase your serendipity surface, though.

What I like from this post by Josh Swords is that he centres agency in his “career advice,” which is achieved by seeking feedback and getting better, demonstrating humility, and working in the open. Powerful stuff.

I’m pretty confident you only need two things. Feedback and humility, and they work best together. Feedback shows you what to work on, and humility lets you actually hear it.

So find your weakest discipline and work on that. The fastest way is to get feedback from someone you admire and then act on it. Don’t wait for the perfect plan, doing something is almost always better than doing nothing.

Find a mentor, be a mentor. Lead a project, propose one. Do the work, present it. Create spaces for others to do the same. Do whatever it takes to get better.

And do it in the open. A common mistake is assuming work speaks for itself. It rarely does.

But all of this requires maybe the most important thing of all: agency. It’s more powerful than smarts or credentials or luck. And the best part is you can literally just choose to be high-agency. High-agency people make things happen. Low-agency people wait. And if you want to progress, you can’t wait.

Source: people, ideas, machines

Image: CC BY-NC Focal Foto

"You know what this needs? Less safety testing and more venture capital!"

Illustration of a surreal office scene with neon birds interacting with digital elements around three people near servers and file cabinets; one bird writes on a digital mesh, another carries a paper.

Collected at this site are some absolutely awful uses of AI. Not only in terms of people misunderstanding what technology can and can’t do, but just really bad ideas. For example, Airbnb hosts fraudulently claiming damage via AI-doctored photos, users being able to hack McDonald’s systems by asking for the password, and Microsoft providing ‘therapy’ via LLMs for laid-off workers.

Named after Charles Darwin’s theory of natural selection, the original Darwin Awards celebrated those who “improved the gene pool by removing themselves from it” through spectacularly stupid acts. Well, guess what? Humans have evolved! We’re now so advanced that we’ve outsourced our poor decision-making to machines.

The AI Darwin Awards proudly continue this noble tradition by honouring the visionaries who looked at artificial intelligence—a technology capable of reshaping civilisation—and thought, “You know what this needs? Less safety testing and more venture capital!” These brave pioneers remind us that natural selection isn’t just for biology anymore; it’s gone digital, and it’s coming for our entire species.

Because why stop at individual acts of spectacular stupidity when you can scale them to global proportions with machine learning?

Source: AI Darwin Awards

Image: IceMing & Digit

AI and the future of education: disruptions, dilemmas and directions

Within the frames are people bound to their office cubicles; beyond them, individuals work freely from diverse locations, connected through digital signals.

A few months ago, UNESCO put a call out under the title AI and the future of education: disruptions, dilemmas and directions. I asked a few talented people I know if they would be interested in working together on a response. In the end, we submitted six separate pieces and then ran an online roundtable. You can see the results here.

We published our versions as, after a couple of months, we hadn’t heard anything back from UNESCO. However, last week we heard that they hadn’t included our pieces in the publication because we’d published them. Ah well, they said that they might put together a web page showcasing what we’ve done.

I’m looking forward to reading some of the contributions, which seem to come from quite diverse sources.

This anthology explores the philosophical, ethical and pedagogical dilemmas posed by the disruptive influence of AI in education. Bringing together insights from global thinkers, leaders and changemakers, the collection challenges assumptions, surfaces frictions, provokes contestation, and sparks audacious new visions for equitable human-machine co-creation.

Covering themes from dismantling outdated assessment systems to cultivating an ethics of care, the 21 think pieces in this volume take a step towards building a global commons for dialogue and action, a shared space to think together, debate across differences, and reimagine inclusive education in the age of AI.

Source: UNESCO

Image: Yutong Liu & Digit

The hysteresis effect means that practices are always liable to be objectively adjusted too late

Auto-generated description: A swirling, abstract pattern of black, white, red, and other colors creates a dynamic and chaotic visual effect.

I learned the word hysteresis only recently, after having a heat pump fitted Chez Belshaw. In that context, it refers to the difference between the temperatures at which the system turns on and off, creating a lag in its response to temperature changes. That lag is a good thing here, as it helps prevent rapid cycling of the heat pump, making the system more stable and improving overall efficiency.
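The thermostat behaviour described above can be sketched in a few lines. This is a minimal illustration, not a real heat pump controller, and the setpoint and band values are made up:

```python
# Minimal sketch of hysteresis ("deadband") control, as in a heat pump
# thermostat. Setpoint and band width are illustrative values.
def make_thermostat(setpoint=20.0, band=1.0):
    state = {"on": False}

    def step(temp):
        # Turn on only below (setpoint - band); turn off only above
        # (setpoint + band).
        if temp < setpoint - band:
            state["on"] = True
        elif temp > setpoint + band:
            state["on"] = False
        # Inside the band, keep the previous state: this lag is the
        # hysteresis, and it prevents rapid on/off cycling as the
        # temperature hovers near the setpoint.
        return state["on"]

    return step
```

Feeding the same in-band temperature (say 20.5°C) to this controller returns a different answer depending on history, which is exactly the lag the term describes.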

Hysteresis in other contexts can be less useful, though. A lag in response to changes can be problematic when it comes to technological change affecting certain sectors, for example knowledge workers. I don’t subscribe to Venkatesh Rao’s Contraptions newsletter, but just the part publicly available is thought-provoking.

Inevitably, those who have financial and cultural capital usually catch up and re-assert their authority and dominance. I’ve seen this in practice when universities were threatened by innovations in MOOCs and Open Badges. It was the “end of universities,” apparently. But, of course, already being in a dominant position, and thanks to the catalyst of a global pandemic, we now have unis with more online than in-person students, issuing microcredentials by the million.

I’m not saying that there won’t be ‘casualties’ and that there won’t be new ways of stratifying society. I think we’re already starting to see some of that in terms of text-based communication being a lot less dominant as a means of social communication. I’m just saying that when you’ve got a lot of financial and cultural capital, you have to do a particularly bad job to squander it entirely.

The mark-maker class lives in the gap between the signs and the systems. They comprise those who produce, coordinate, teach, and manage the functioning of symbolic order on the one hand, and those who remix, stylize, dream, and craft the cultural tones in which life is rendered bearable, on the other. These are not two classes, but one diffuse stratum, braided of institutional and aesthetic lives, of spreadsheets and metaphors.

Once insulated by the seeming security of letters and credentials, the mark-makers now find themselves in hysteresis: lagging in the face of an accelerating world that no longer waits for meaning to catch up.

[…]

Bourdieu teaches us that when the field moves faster than the habitus can adapt, a lag sets in. “The hysteresis effect,” he writes, “means that practices are always liable to be objectively adjusted too late, and that habitus tends to lag behind the changes in the field.” This lag is not benign. It is lived as confusion, disorientation, ridicule, even rage. What was once a reliable feel for the game becomes a stale reflex. The mark-makers—who for decades moved with the grain of modern institutions—now find their instincts misfiring. Their ways of knowing, their modes of speaking, their cultivated manners now appear quaint, procedural, indulgent. They are mocked by populists as condescending, and by radicals as co-opted. Worse, they are made redundant by machines that now wield, with eerie fluency, the very tools they mastered.

Source: Contraptions

Image: Logan Voss

Are we decentralised yet?

Auto-generated description: A comparison chart shows diversity and concentration metrics for Fediverse and Atmosphere platforms, displaying server counts, sizes, and Shannon Index values as of August 30, 2025.

One of the things that I see on repeat in discussions around federated social networks is how decentralised Bluesky is compared with, say, Mastodon. What I like about this site is that a) it’s visual, and b) it tries to use some kind of scientific rationale to compare the two.

So yes, while you can say that Bluesky is decentralised in theory, in practice it’s very much not. Yet, anyway.

This site currently measures the concentration of user data for active users: in the Fediverse, this data is on servers (also known as instances); in the Atmosphere, it is on the PDSes that host users' data repos. All PDSes run by the company Bluesky Social PBC are aggregated in this dataset, since they are under the control of a single entity. Similarly, mastodon.social and mastodon.online are combined as they are run by the same company.

A note about the measurement:

The Shannon Index is an entropy-based measure used in ecological studies. It is computed the same as Shannon entropy using the natural log: the negative sum over all servers of the “market share” times the log of the market share. Lower values indicate lower entropy (a high concentration of one species), while higher values indicate a more even population. In this context, the maximum value is the natural log of the number of servers, which would mean that all servers have equal population.
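The calculation described above is short enough to sketch directly; the server populations below are invented for illustration:

```python
import math

# Sketch of the Shannon Index as described above: the negative sum,
# over all servers, of each server's "market share" times the natural
# log of that share.
def shannon_index(populations):
    total = sum(populations)
    shares = [p / total for p in populations if p > 0]
    return -sum(s * math.log(s) for s in shares)

# One dominant server -> low entropy (high concentration of users)
concentrated = shannon_index([9_000, 50, 30, 20])

# Equal populations -> maximum entropy, ln(number of servers)
even = shannon_index([100, 100, 100, 100])
```

With four equally sized servers the index is ln(4) ≈ 1.39, while the concentrated example scores close to zero, which is why a network dominated by one provider registers as barely decentralised on this measure.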

Source: Are We Decentralized Yet?

A list of intentions; a poem; the “I want” song; not a bucket list——

I love this idea, a list of wants which, over time, accrete into a kind of “song” rather than a bucket list. Dom Corriveau (who hosts his website on an old Google Pixel 5!) stole the idea from Keenan who, in turn, got the idea from Katherine Yang. Interesting people all.

Source: Dom Corriveau

"Zurich doesn’t want to pool with Jakarta"

Two maps show predicted inundation risks for cities with sea level rises.

In addition to his Just Two Things blog, futurist Andrew Curry also writes at The Next Wave on “futures, trends, emerging issues and scenarios.” In this post, which cites investment writer Joachim Klement, Curry talks about sea level rise, with 1.5m being the tipping point. This “sounds fine in theory since the base case IPCC projections are for around a one metre increase by 2100,” but that “doesn’t allow for subsidence (often caused by water extraction), or the possibility of cascading climate change.”

The interesting bit for me, though, was the section he entitles “moral hazards” which relate to insurance and government intervention. Basically, the stronger you make coastal defences, the more likely people are to live there.

Hsiao has also done more specific work on Jakarta, which experiences frequent flooding. There are some interesting conclusions here, broadly that a strong government commitment to sea defences (in Jakarta’s case a sea wall) creates a moral hazard, because it

attracts coastal residents, slows inland migration, and lowers the incentives for inland development. The consequence is continued spending on coastal defense and large damages should it fail.

Insurance doesn’t work because places that don’t flood don’t want to pool with places that do (“Zurich doesn’t want to pool with Jakarta”). Alternatives that might place more of the financial burden on people who choose to live in coastal areas might work, but are open to political lobbying. And once you’ve decided to go with sea defences, and people decide to live behind them, you face political pressure to keep on strengthening the sea defences.

Source: The Next Wave

A remarkable 45% increase in solar capacity

Auto-generated description: Rows of solar panels blanket rolling hills under a clear sky, with misty mountains in the background.

While we in the UK have at least one major political party vowing to extract all of the oil and gas out of the North Sea, China has met its renewable energy goals five years early.

Again, while we have arguments about “net zero” and the aesthetics of solar farms, this summer was the hottest on record and hot homes are making children sick. Meanwhile, China is covering entire mountain ranges in solar panels.

It’s a climate emergency. We should act like it.

China broke its own renewable energy record once again in 2024, installing 80 gigawatts (GW) of wind capacity and 277 GW of solar capacity, according to the National Energy Administration, as reported by Recharge News. This marks an impressive 18% growth in wind capacity, now totalling 520 GW, and a remarkable 45% increase in solar capacity, which has reached 890 GW. Combined, these achievements fulfil the 1.2 terawatts (TW) renewable energy capacity target set by President Xi Jinping in 2020 – a goal originally intended for 2030.

[…]

The International Energy Agency (IEA) has previously pointed to China’s advancements as a key factor in keeping the global goal of tripling renewable power capacity by 2030 within reach. Despite ongoing construction of new coal-fired power plants, China’s total power generation saw a nearly 15% increase in 2024, reaching 3.35 TWh.

Source: The Renewable Energy Institute

Image: The Independent

From misdiagnosis and error to unequal access to care

This image shows two labelled swab tubes with red caps and a glass jar full of blue and white pills, floating against a grey background and refracted in different ways by a fragmented glass grid. This grid is a visual metaphor for the way that new artificial intelligence (AI) and machine learning technologies can be used to extract and analyse medical data in innovative ways. Some of the grid squares reveal graphical interpretations of the objects that exceed the capabilities of human vision, which indicates how cutting edge technologies offer ways to augment traditional human understandings of complex phenomena. A neural network diagram is overlaid, familiarising the viewer with the formal architecture of AI systems.

I am in agreement with this article in The Guardian by Charlotte Blease, a researcher at Harvard Medical School and author of the forthcoming book Dr Bot: Why Doctors Can Fail Us – and How AI Could Save Lives. Her point is that while AI might not be perfect, neither are doctors, and it can definitely speed up and help with diagnoses.

Personally, I’m now 7.5 months into trying to get a diagnosis for symptoms I started experiencing on January 15th of this year. It’s the nature of medicine to do tests and rule things out, but a combination of NHS resourcing and (some) health professionals' attitudes have made the experience sub-optimal.

For example, when presenting at A&E thinking I was having a heart attack, I had an echocardiogram followed by a doctor taking me into a side room and somewhat aggressively asking me why I was there. When a GP found out that I’m vegetarian, he suggested I start eating meat — even though it had nothing to do with the issue at hand. I’ve been waiting almost six weeks for a urine test result.

So, I’ve been supplementing the information I get from health professionals and via the NHS app with asking LLMs (usually Perplexity or Lumo) about what my test results might mean, and what else might be causing the symptoms. It’s given me a list of things to ask doctors, nurses, and those performing tests, and it’s helped me know what kinds of things I should be avoiding — other medications, supplements, food, drinks, activities, etc.

Obviously, this needs to be done with huge amounts of safeguards and guardrails in place. But not to use technology which may prove useful at scale? I think that’s irresponsible.

Given that patient care is medicine’s core purpose, the question is who, or what, is best placed to deliver it? AI may still spark suspicion, but research increasingly shows how it could help fix some of the most persistent problems and overlooked failures – from misdiagnosis and error to unequal access to care.

As patients, each of us will face at least one diagnostic error in our lifetimes. In England, conservative estimates suggest that about 5% of primary care visits result in a failure to properly diagnose, putting millions of patients in danger. In the US, diagnostic errors cause death or permanent injury to almost 800,000 people annually. Misdiagnosis is a greater risk if you’re among the one in 10 people worldwide with a rare disease.

[…]

Medical knowledge also moves faster than doctors can keep up. By graduation, half of what medical students learn is already outdated. It takes an average of 17 years for research to reach clinical practice, and with a new biomedical article published every 39 seconds, even skimming the abstracts would take about 22 hours a day. There are more than 7,000 rare diseases, with 250 more identified each year.

In contrast, AI devours medical data at lightning speed, 24/7, with no sleep and no bathroom breaks. Where doctors vary in unwanted ways, AI is consistent. And while these tools make errors too, it would be churlish to deny how impressive the latest models are, with some studies showing they vastly outperform human doctors in clinical reasoning, including for complex medical conditions.

Source: The Guardian

Image: Alan Warburton, Better Images of AI

💥 Thought Shrapnel: 31st August 2025

This is the last week of Thought Shrapnel being in “low-power mode” over the summer. Please find 10 interesting things with minimal commentary ☀️

Auto-generated description: A hand is holding a soft-serve ice cream cone with a chocolate flake in front of an ice cream truck.

Image CC BY-NC-ND su-lin

  • 99 Problems: The Ice Cream Truck’s Surprising History (Longreads) — “The Glasgow Ice Cream Van Wars sound like they had all the elements of a cozy crime novel, but the reality was very different. The city’s Serious Crime Squad was dispatched to investigate and stop the fighting, but quickly became the object of derision, renamed by the locals as the Serious Chimes Squad, thanks to their inability to apprehend the perpetrators.”
  • We must fight age verification with all we have (User Mag) — “Age verification, like book bans and obscenity laws, will not be narrowly used to prevent access to pornography–it will be used to “legislate morality,” control access to information and limit people’s freedom to self-manage their health and family well-being according to their own morals.”
  • A new personality type - ‘Otrovert’ - is here to make life even more confusing (GQ) — “‘Otrovert’ – coined this year by New York psychiatrist Dr. Rami Kaminski – describes people who are averse to communing with groups, a bit like Groucho Marx who famously refused to join a club that would have him as a member (it seems the term “Marxist” was already taken). Other characteristics of otroverts include being an ‘original thinker’, ‘valuing deep connections’ and preferring ‘authenticity over conformity’. If that sounds relatable, why not join the club? Erm. Maybe not.”
  • The ROI of exercise (Herman’s blog) — “It’s well understood that a good exercise routine is a mixture of strength, mobility, and cardio; and is performed at a decent intensity for 2-4 days a week for at least 45 minutes…. This totals about 3 hours a week, or 156 hours per year. If we extrapolate that over an adult lifetime, that’s about 8,500 hours of exercise, or about a year of solid physical activity…[O]ver a lifetime, one full year of exercise leads to 10 full years of extra life. That’s a 1:10 return on investment! So even without any of the additional benefits… this is still one of the best investments you can make.”
  • Are Marathon Runners More Likely to Get Cancer? (VICE) — “It all started when Dr. Timothy Cannon, an oncologist at Inova Schar Cancer Institute, noticed something off. Three of his patients, all under 40, were super-fit endurance athletes. They didn’t drink. They didn’t smoke. One was vegan. Yet all had advanced colon cancer, and none of the usual risk factors. So, he turned this mystery into a research study.”
  • Writing with LLM is not a shame. An essay about transparency on AI use. (reflexions) — “I think we fell in a ethical fallacy, in a way, with the emergence of a new tech. Even if we compare LLM with other techs, we do not have the same ethics requirements against LLM. Some might say we have to because ELM can generate much more than previous techs but we are falling again in the same reasoning trap.”
  • Britain leads the world in a new global business—a criminal one (The Economist) — “Britain accounts for 40% of phone thefts in Europe… British thieves’ favoured method is to approach from behind on an electric bike, grab an unlocked phone and put it in a “Faraday bag” to prevent tracking; most of the nicked phones end up in China. Meanwhile, around 130,000 cars were stolen in Britain last year, a rise of 75% in a decade. SUVs are popular targets, for export to the Gulf and Africa, where they can handle poor roads.”
  • We Are Still Unable to Secure LLMs from Malicious Inputs (Schneier on Security) — “This kind of thing should make everybody stop and really think before deploying any AI agents. We simply don’t know to defend against these attacks. We have zero agentic AI systems that are secure against these attacks. Any AI that is working in an adversarial environment—and by this I mean that it may encounter untrusted training data or input—is vulnerable to prompt injection. It’s an existential problem that, near as I can tell, most people developing these technologies are just pretending isn’t there.”
  • Reform and the UK press (mainly macro) — “The recent coverage of immigration and asylum in the right wing press has been almost apocalyptic. They have been hyping small demonstrations as if they were indicators of impending national unrest, and the broadcast media has largely followed their lead. The recent celebration by the Mail, Sun and Telegraph of someone who pleaded guilty to inciting racial hatred makes “Hurrah for the Blackshirts” sound rather tame. We have reached the point where a majority of the print media are in effect encouraging civil unrest and racial hatred, yet thanks to political short termism this press remains essentially unaccountable for their behaviour.”
  • The great university rip-off (The New World) — “Among dozens of my friends leaving university this year, only three are heading straight into employment. The rest are either going on to further study or taking a gap year before doing so – in their own words, to avoid the inevitable hellscape of the grad job market. For those of us who have braved the post-university job application humiliation ritual, we know we should be grateful to even get a rejection letter, and to try not to have a breakdown when our parents send us 8,000 articles about graduates on universal credit or AI replacing interns.”

👋 See you next week!

– Doug

💥 Thought Shrapnel: 24th August 2025

Thought Shrapnel is in “low-power mode” over the summer. Please find 10 interesting things with minimal commentary ☀️

Auto-generated description: A scatter plot displays generative AI apps with axes showing average daily time spent per user versus average daily active users, where bubble sizes represent app revenue.
  • 4o-4 Not Found (thejaymo) — “Looking closer, 72% of Character.AI’s users are female. Which suggests the rug-pull of 4o more widely may be less a sad incel AI girlfriend story and more an AI boyfriend apocalypse.”
  • Review of Anti-Aging Drugs (Aging Matters) — “My favorites from this list are Melatonin, Berberine, NAC, Rapamycin, and Selegiline. I can recommend the first three unequivocally. Rapamycin has down sides that you should consider, and Selegiline has effects on mood and energy that you may like or dislike. Personally, I take a variety of anti-inflammatory supplements, and I’m glad to have an excuse to eat dark chocolate.”
  • Digital Sovereignty Index (Nextcloud) — “Whether it’s about protecting sensitive data, avoiding vendor lock-in or ensuring democratic control over infrastructure, the debate around digital sovereignty is gaining momentum. But how sovereign is a country’s digital infrastructure in practice?”
  • Porn censorship is going to destroy the entire internet (Mashable) — “The stated reason behind these laws is to “protect children.” But as journalist Taylor Lorenz pointed out, in the UK, age verification is already preventing children from accessing vital information, such as about menstruation and sexual assault.”
  • Sunny Days Are Warm: Why LinkedIn Rewards Mediocrity (Elliot C Smith) — “The vast majority of it falls into Toxic Mediocrity. It’s soft, warm and hard to publicly call out but if you’re not deep in the bubble it reads like nonsense. Unlike it’s cousins ‘Toxic Positivity’ and ‘Toxic Masculinity’ it isn’t as immediately obvious. It’s content that spins itself as meaningful and insightful while providing very little of either. Underneath the one hundred and fifty words is, well, nothing. It’s a post that lets you know that sunny days are warm or its better not to be a total psychopath. What is anyone supposed to learn from that.”
  • The End of Handwriting (WIRED) — “But if kids always have access to devices, does it really matter whether they can write with their hands? Yes and no. If the past few years of digital nomad work and vibe coding have taught us anything it’s that, professionally, handwriting may not be all that necessary in a lot of fields. The problem is that learning handwriting might be necessary to learn everything else. “We don’t yet know what we are losing in terms of literacy acquisition by de-emphasizing handwriting fluency,” Ray says.”
  • ‘A climate of unparalleled malevolence’: are we on our way to the sixth major mass extinction? (The Guardian) — “It turns out that there are only a few known ways, demonstrated in the entire geologic history of the Earth, to liberate gigatons of carbon from the planet’s crust into the atmosphere. There are your once-every-50m-years-or-so spasms of large igneous province volcanism, on the one hand, and industrial capitalism, which, as far as we know, has only happened once, on the other.”
  • Why I’m all-in on Zen Browser (Ben Werdmuller) — “So I was pleased to rediscover Zen Browser, which has improved in leaps and bounds since I last tried it. It has a very Arc-inspired UI that gets out of your face quickly, with all the customization and keyboard shortcuts you’d expect from something built on top of Firefox. I use vertical tabs in a sidebar that auto-hides, and I can navigate just as smoothly as I ever did with Arc.”
  • Changes Coming to Higher Ed (Hybrid Horizons) — “Some may not land as written; timelines slip, context matters, contexts change, things shift and people can surprise us. Still, read this as a calm, plain-spoken brief about potential shifts coming in the sector.”
  • The circular economy could make demolition a thing of the past – here’s how (The Conversation) — “This paradigm shift – from a single-use mindset to one of “reduce, reuse, recycle” – is already common in other fields. It is now starting to take hold in construction through various global initiatives that seek to integrate these concepts into safer, more sustainable and more durable buildings. They show how this can be achieved through conscious design, based on concepts such as modularity and standardisation.”

👋 See you next week!

– Doug

💥 Thought Shrapnel: 17th August 2025

Thought Shrapnel is in “low-power mode” over the summer, sharing 10 interesting things with minimal commentary ☀️

Auto-generated description: A collage humorously combines the faces of famous figures with those of other well-known personalities through drawn-on facial features and expressions.
  • Witty wotty dashes (Aeon) — “Because of its radical openness to difference, the doodle tends to function as a kind of meta-aesthetic attuned to containing a network of ambivalent affects and fleeting everyday aesthetic experiences that become increasingly common in the 20th century.”
  • A moment that changed me: I resolved to reduce my screen time – and it was a big mistake (The Guardian) — “[R]educing my screen time had become its own form of phone addiction. Rather than escaping the need to seek validation from strangers online, I had happened upon a new way to earn their approval.”
  • Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens. (The New York Times) — “Sycophancy, in which chatbots agree with and excessively praise users, is a trait they’ve manifested partly because their training involves human beings rating their responses.”
  • How to not build the Torment Nexus (Mike Monteiro’s Good News) — “As industries mature, they tend to get a little boring. And as industries age, and start seeing their own collapse over the horizon, they tend to get… defensive. Bitter. Conservative… Tech, which has always made progress in astounding leaps and bounds, is just speedrunning the cycle faster than any industry we’ve seen before.”
  • System Font Stack — “Webfonts were great when most computers only had a handful of good fonts pre-installed. Thanks to font creation and buying by Apple, Microsoft, Google, and other folks, most computers have good—no, great—fonts installed, and they’re a great option if you want to not load a separate font.”
  • Signal boss: ‘disturbing’ laws show the UK doesn’t understand tech (The Times) — “She says that one of the “most pernicious and alarming” problems is that if a company accepts a “technical capability notice”, it is prohibited from informing users. The upshot: “We don’t know [if any] other company has received one of those notices and responded by rolling over.” Whittaker says Signal has not received one, but that the company would sooner “leave” the UK than comply.”
  • Welcome to the Cosmopolis (Contraptions) — “In brief, new technologies induce new normals through protocolization of what is initially a weird and scary sort of monstrousness irrupting across a frontier. Beyond that frontier lies a new kind of territory, a new kind of “soil” on which societies can be built.”
  • Funding Open Source like public infrastructure (Dries Buytaert) — “Governments already maintain roads, bridges, and utilities, infrastructure that is essential but not always profitable or exciting for the private sector. Digital infrastructure deserves the same treatment. Public investment can keep these core systems healthy, while innovation and feature direction remain in the hands of the communities and companies that know the technology best.”
  • “Privacy preserving age verification” is bullshit (Pluralistic) — “NERD HARDER! is the answer every time a politician gets a technological idée-fixe about how to solve a social problem by creating a technology that can’t exist. It’s the answer that EU politicians who backed the catastrophic proposal to require copyright filters for all user-generated content came up with, when faced with objections that these filters would block billions of legitimate acts of speech.”
  • The Logic of the ‘9 to 5’ Is Creeping Into the Rest of the Day (The Atlantic) — “One way to look at 5-to-9 videos is as the product of people trying to make the most of the leisure time they have… But in attempting to take control back from their jobs, many 5-to-9 video creators end up reproducing a version of the thing they are trying to distance themselves from. If you clock out, go home, and continue checking things off a list, you haven’t really left the values of work behind.”
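The system font stack mentioned in the first item above boils down to a single CSS `font-family` declaration. A commonly seen variant looks like this (the exact list of fallbacks varies from site to site, and newer browsers also support the `system-ui` keyword as a shorthand):

```css
/* Use whatever the operating system's default UI font is,
   without downloading a webfont. */
body {
  font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto,
               "Helvetica Neue", Arial, sans-serif;
}
```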

👋 See you next week!

– Doug

💥 Thought Shrapnel: 10th August 2025


That’s right, Thought Shrapnel continues in “low-power mode” over the summer, so I’ll continue to share 10 interesting things but provide minimal commentary ☀️

Auto-generated description: Two children are having fun communicating with a tin can telephone while sitting on a bamboo fence in a lush green field.
  • Human speech may have a universal transmission rate: 39 bits per second (Science) — As someone who has enjoyed and endured many conference presentations in my time, this is very interesting to me. “No matter how fast or slow, how simple or complex, each language gravitated toward an average rate of 39.15 bits per second.”
  • Tuition fees are rising again and nobody is happy – it’s time to actually fix our broken university sector (The Guardian) — A short, well-written piece by Zoe Williams about the parlous state of Higher Education. Depending on his results next week, hopefully my son is heading off to a university which will still exist in three years' time…
  • Marking the Government’s homework on public sector AI (imperfect offerings) — Helen Beetham with a lengthy analysis of what’s going on in the UK in relation to AI, which is possibly best summarised by her withering put-down of the memoranda of understanding the government has signed with Google and Microsoft, respectively, as “a cute name for a declining world power signing its assets over to the new ones.”
  • Mexit, not Brexit, is the new priority for the UK (The Register) — Related to the above, although to do with Microsoft licences rather than AI, Rupert Goodwins notes that spending £9 billion over 5 years means that “Microsoft gets one pound of every 13 spent” by the UK government on digital technology.
  • Didn’t Take Long To Reveal The UK’s Online Safety Act Is Exactly The Privacy-Crushing Failure Everyone Warned About (TechDirt) — 5 of the top 10 apps in the Apple store are VPNs, and “Yes, you read that right. A law supposedly designed to protect children now requires victims of sexual assault to submit government IDs to access support communities.” Slow claps all round.
  • In the Future All Food Will Be Cooked in a Microwave, and if You Can’t Deal With That Then You Need to Get Out of the Kitchen (Random Thoughts) — Colin Cornaby with an absolutely on-point parody of the AI bubble. “I saw online another restaurant owner suggested deploying one thousand microwaves for each chef. This sounds like a great idea.”
  • The Sunday Morning Post: Why Exercise Is a Miracle Drug (Derek Thompson) — I can’t do proper cardio at the moment, but I’m trying to get a long walk in every day and, six days out of seven, I’m lifting weights. Exercise is important: “To a best approximation, aerobic fitness and weight-training seem to increase our metabolism, improve mitochondrial function, fortify our immune system, reduce inflammation, improve tissue-specific adaptations, and protect against disease.”
  • Face it: you’re a crazy person (Experimental History) — I love the way that Adam Mastroianni writes, and this post is a great example of why. “There’s no amount of willpower that can carry you through a lifetime of Tuesday afternoons. Whatever you’re supposed to be doing in those hours, you’d better want to do it.”
  • You can’t fight enshittification (Pluralistic) — A little bit pessimistic, but nevertheless true from Cory Doctorow: “Enshittification is not the result of people making bad choices: it’s the result of bad policies that produce bad systems… When all your friends are going to a festival, are you really going to opt out because the event requires you to use the Ticketmaster app (because Ticketmaster has a monopoly over event ticketing)? If so, you’re not gonna have a lot of friends, which is a pretty shitty way to live.”
  • Do we need the wealthy? (Funding the Future) — Richard Murphy, Professor of Accounting Practice at the University of Sheffield’s Management School, with a well-argued video (with transcript) of why we shouldn’t be particularly bothered if rich people threaten to quit the UK due to a higher tax burden.
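The 39 bits-per-second finding in the first item above falls out of a simple product: information per syllable multiplied by syllables per second. The striking result is that languages trade one factor off against the other so that the product stays roughly constant. A minimal sketch, using illustrative numbers (not the paper’s actual measurements):

```python
def information_rate(bits_per_syllable: float, syllables_per_second: float) -> float:
    """Information rate in bits/s: per-syllable information × speaking speed."""
    return bits_per_syllable * syllables_per_second

# Hypothetical profiles: an information-dense but slowly spoken language
# versus a sparser but faster one. Both land near the ~39 bits/s average.
profiles = {
    "dense_but_slow": (8.0, 5.0),    # 8 bits/syllable at 5 syllables/s
    "sparse_but_fast": (5.0, 7.8),   # 5 bits/syllable at 7.8 syllables/s
}

for name, (density, speed) in profiles.items():
    print(f"{name}: {information_rate(density, speed):.1f} bits/s")
```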

👋 Until next week!

– Doug

💥 Thought Shrapnel: 3rd August 2025

Thought Shrapnel is in “low-power mode” over the summer, sharing 10 interesting things with minimal commentary ☀️

Illustration depicting the world as bomb and the hand of a billionaire going to light the fuse with a match

👋 That’s it until next week!

– Doug

💥 Thought Shrapnel: 27th July 2025

A reminder that Thought Shrapnel is in “low-power mode” over the summer. Here are 10 interesting things with minimal commentary ☀️

Auto-generated description: A shopping cart filled with items is depicted as part of a barcode on an orange background.

👋 That’s it until next week! Check out my latest week note if you’re still looking for something to read.

– Doug

💥 Thought Shrapnel: 20th July 2025

Thought Shrapnel continues in low-power mode over the summer: 10 interesting things each week with minimal commentary ☀️

Auto-generated description: Various types of deadlines are creatively illustrated through different arrangements of dashed lines, each labeled with a different descriptor such as unexpected, impossible, and phantom.
  • Timelines, deadlines and lifelines (Temporal Imagination) — The above image, which I think is fantastic, comes from a workshop by Keri Facer and Harriet Hand and “explores some of the habits we have of thinking with time that are inherited and powerful.” You can purchase a limited-edition, hand-pressed portrait version of the above image at The Department of Small Works.
  • Sixteen and 17-year-olds will be able to vote in next general election (Sky News) — It was in their manifesto, but you never know with this Labour government. It means my daughter, who will be 17 at the time of the next UK General Election, will be able to vote!
  • Treating beef like coal would make a big dent in greenhouse-gas emissions (The Economist) — If you care about the climate emergency and want to do something about it, you should stop eating beef (and preferably all meat).
  • The new solar: what colour panel would you like? (The Reengineer) — After having a heat pump installed last month, we’re getting (standard) solar panels in a few weeks' time. These coloured solar panels, though, look very cool and are likely to help with uptake.
  • A first-party data reality check (OLDaily) — I agree with Stephen Downes' take on advertising here. I hate it, and believe its pervasiveness in society to be antithetical to human flourishing.
  • How culture is made (Metalabel) — I love this from Yancey Strickler: “A metalabel is a release club where groups of people who share the same interests drop and support work that reflects their point of view.” I am absolutely up for this approach, especially after reading the Adam Curtis quote he cites.
  • ‘The perfect accompaniment to life’: why is a 12th-century nun the hottest name in experimental music? (The Guardian) — I used an illustration by Hildegard of Bingen, 12th century nun, to illustrate a post about migraines last year. What I didn’t know was how influential her music was, and how much of her creative outpouring happened in her forties. Inspiring.
  • There is No Meritocracy Without Lottocracy (Assemble America) — Given how much of life is down to serendipity and chance (including where and to whom you were born) I think we should lean into this a bit: “With random selection, no action or investment can meaningfully improve one’s chances, rendering efforts to manipulate the system worthless. This nullifies political capital and ensures that authority is not seized by those adept merely at influencing outcomes through charm, money, or connections.”
  • How GLP-1s Are Breaking Life Insurance (GLP-1 Digest) — It looks like the class of drugs better known by brand names such as Ozempic and Mounjaro are likely to have the same kind of effect as statins on the general population.
  • How much does your road weigh? (The Architectural Review) — “Today, there is an average of 37 tonnes of road per inhabitant of the planet. The weight of the road network alone accounts for a third of all construction worldwide, and has grown exponentially in the 20th century. There is 10 times more bitumen, in mass, than there are living animals. Yet growth in the mass of roads does not automatically correspond to population growth, or translate into increased length of road networks.” 🤯

👋 Until next week! I’m still around so feel free to comment / hit reply on this, and let me know what you enjoyed reading.

– Doug

💥 Thought Shrapnel: 13th July 2025

A reminder that Thought Shrapnel is in low-power mode over the summer. I’m continuing to share 10 things each week — but in a single post, with a tiny bit of commentary ☀️

Auto-generated description: A solitary tree stands against a backdrop of swirling, colorful star trails.
  • ZWO Astronomy Photographer of the Year 2025 shortlist (RMG) — The above image is entitled “Dragon Tree Trails” by Benjamin Barakat and it’s made up of 300 individual exposures. It’s one of a number of stunning images on this shortlist.
  • Artificial intelligence is the opposite of education (imperfect offerings) — Helen Beetham goes full hardcore mode against AI (by which I understand she means the sociotechnical system around generative AI). I hope she releases the podcast episode I recorded with her a few months ago soon!
  • AI Can’t Take Over Soon Enough For Me. (Roving Dynamics Ltd) — Literally the opposite view to Helen’s, although not as eloquently stated. I’m somewhere in the middle of the two, ever since reading Fully Automated Luxury Communism.
  • Jack Dorsey made an encrypted Bluetooth messaging app (The Verge) — In many ways, Bluetooth mesh messaging is nothing new (see Briar). However, bitchat adds end-to-end encryption, message encryption, battery optimisation, and other useful features. It’s already been ported to Android.
  • 4.6 Billion Years On, the Sun Is Having a Moment (The New Yorker) — In a few weeks' time we’re getting as many solar panels installed Chez Belshaw as possible. This article explains why.
  • The Mask-Off Moment for Digital Identity (The New Design Congress) — This is the foreword to an upcoming report which sounds like it’s going to be dynamite. I work in the area of digital credentials, mainly to do with recognition, but there is obviously quite a large digital identity component to all of that…
  • Junior Roles Aren’t Going Away (In The AIrena) — I’d ordinarily have a lot to say about this but, largely, I agree with it. What you get out of generative AI is largely what you put into it. And that includes clarity of thought/intention.
  • Stop making me log in to everything (Embedded) — I mean, the irony of having to login to read all of this article, but again I’ve thought a lot about this. Especially when AI companies are hoovering up the open web.
  • You’re Not Wasting Time, You’re Wasting Your Life (Part 1) (Part 2) (The Daily Stoic) — Listen to both parts for context, as this conversation between Ryan Holiday and Rutger Bregman is fascinating. Part 2 is gold: Bregman talks about his book Moral Ambition and how the quotation attributed to Margaret Mead is absolutely true.
  • No (Poetry Foundation) — I’m not sure where I came across this article by Anne Boyer from 2017, but it is magnificent: “History is full of people who just didn’t. They said no thank you, turned away, ran away to the desert, stood on the streets in rags, lived in barrels, burned down their own houses, walked barefoot through town, killed their rapists, pushed away dinner, meditated into the light.”

👋 There we go! I’m still on the other end of the internet, so feel free to comment / hit reply on this, and let me know what resonated.

– Doug

💥 Thought Shrapnel: 6th July 2025

Thought Shrapnel is going into low-power mode for a few weeks over the summer. Instead of posting nothing at all, I’ve decided to continue to share 10 things each week — but in a single post, with minimal commentary ☀️



👋 That’s it! Have a good week. I’m still around if you comment / hit reply on this, so let me know what resonates.

– Doug

Unfortunately, a further escalation of the already dismal curtailing of academic freedom in the US appears to be likely.

Auto-generated description: A screen displays repeated error messages indicating failure to load resources.

Most people know about the Internet Archive and its role in preserving the history of the web. Less well-known are archives such as Anna’s Archive and other ‘shadow libraries’. Yes, you can use shadow libraries to pirate books, but as it proudly states, it’s “The largest truly open library in human history.”

TIB — the Technische Informationsbibliothek, or Leibniz Information Centre for Science and Technology — is Germany’s national library for engineering, technology, and the natural sciences. They’ve created a ‘dark archive’ of arXiv, which is a freely accessible online archive for scientific preprints, i.e. publications of scientific works that have not yet (fully) been peer-reviewed. These preprints are important for researchers accessing the latest research results.

They are explicitly doing this due to the situation in the USA at the moment, which is a good reminder to us all that the way the world used to be is not the way it is now. We should both update our mental models of how things work, and act to protect the things we hold dear.

(Interestingly, the authors note that, until fairly recently, there were mirrors held elsewhere, but the advent of fast Content Delivery Networks (CDNs) meant that mirrors felt like an ‘overhead’ and ‘inefficient’. It’s a good reminder that using so-called cloud-based services simply means having your data on somebody else’s computer…)

Research and science are international, hence we are speaking of international scientific communities. A service such as arXiv might be operated by a US-based institution, Cornell University, but arXiv is being used by researchers worldwide, as, e.g., impressively evidenced by the submission statistics. Moreover, since the introduction of arXiv Membership in 2010, the funding of arXiv has been partially internationalised. TIB funds the German contribution, together with the Helmholtz Association of German Research Centres (HGF) and the Max Planck Society (MPG).

So when the Trump administration makes decisions that have fatal consequences for science and research in the US, the repercussions reach far beyond the Gulf of Mexico: over the past few days, reports have been mounting in German media that attest to researchers not only fearing the loss of data, but also the loss of established information portals such as PubMed.

[…]

Unfortunately, a further escalation of the already dismal curtailing of academic freedom in the US appears to be likely. Not least due to the great importance of US institutions in the international academic system, these developments affect research infrastructures worldwide. As “Safeguarding Research and Culture” writes in its mission statement, this warrants a change of mind, among other things towards more decentralised and thus more resilient infrastructures.

It’s worth noting that the current mirror / backup isn’t public:

The data are being stored, but if push comes to shove it would need some more steps to make them publicly available. Because a database service is much more than a mere backup copy of the data: Operating a productive user-facing service not only needs technical resources, but first and foremost a committed team which in the background takes care of diverse aspects such as quality assurance, content curation, or (technical) development.

Source: TIB blog

Image: David Pupăză

Free, customisable exemplar badges to support consistent, credible recognition of skills and learning across the UK.

Auto-generated description: A geometric pattern composed of various colored triangles including blue, red, yellow, black, and white.

I’m loath to be critical of efforts to encourage the use of badging in the UK, but this guide from the Digital Badging Commission is partying like it’s 2019 🫤

The response to my criticism will, no doubt, be that they’re trying to keep things “simple”. Having worked in many of the sectors targeted by these exemplar badges I think the examples are both out of date and, well… just not useful.

What do I mean?

  • Schools: “Responsive student” badge which is essentially rewarding compliance.
  • Higher Education: “Law Clinic Volunteer” badge which apparently aligns with the “Staying Positive” part of the Skills Builder framework(?)
  • Vocational Skills: “Health and Safety Practitioner (ISO 45001:2018)” badge which is the kind of thing that the BSI should be endorsing.

It’s all somewhat disappointing, especially as the point of Open Badges, as outlined in the 2012 Mozilla white paper, was to empower learners. This seems to be at odds with this set of exemplar badges.

I’ve also got lots of opinions about the talk of the need for ‘consistency’ going back to this post I wrote back in 2012(!) about what people mean when they talk about “rigour.”

I’ve just been helping facilitate the Digital Credentials Consortium Summit over in The Netherlands which was a really forward-thinking space. The Open Badges standard is at v3, and aligns with the Verifiable Credentials data model. The Digital Badging Commission’s resources always feel behind the curve. Where’s the discussion of badge images being optional? Of digital wallets? Of the metadata fields introduced via VC-EDU? Sigh.

If you need a discussion based on up-to-date information and relevant examples, you know where I am.

The Digital Badging Commission has launched a suite of free, customisable exemplar badges to support consistent, credible recognition of skills and learning across the UK.

Developed in partnership with practitioners from education, employment, and community sectors, the 12 templates show how digital badges can be used in real-world settings – from schools and colleges to volunteering, the arts and the workplace.

Source: Digital Badging Commission

Image: George Pagan III

People contribute in their free time. Gratitude is the least we can offer.

Auto-generated description: An illustration depicts concepts of participation including clear mission, invitation to participate, easy onboarding, modular approach, strong leadership, open ways of working, backchannels and watercoolers, and celebration of milestones.

I’m sharing this resource primarily to bookmark it for future reference. It’s a fantastic introductory guide for those running Open Source projects on how to get more contributors — and keep them coming back.

Along with a clear mission, easy onboarding, and a modular approach (all parts of WAO’s Architecture of Participation shown above) the guide also talks about creating a place of generosity. I couldn’t agree more. There are lots of reasons why people contribute to Open Source software but making them feel good and valuable is always important.

Be kind, friendly, and grateful. People contribute in their free time. Gratitude is the least we can offer. Kindness helps create a welcoming and respectful space. It also makes discussions and disagreements much smoother. Ideally, you want the community to be able to manage the project without needing you all the time. That starts with setting the tone.

Be as responsive as you can, but don’t expect contributors to match your speed. Some will only have time to work on weekends or late at night. If you also maintain your project in your spare time, you’ll have similar expectations. It’s okay that some PRs will take weeks (with some back and forth) before they can be merged. That’s just how open-source works sometimes.

Be ready to help. At first, you might feel like you’re losing time (“If I did it myself, it would be faster”), but that’s short-term thinking. In the medium term, if you build the right environment, people will come back, and you’ll get those valuable recurring contributors that make a project healthy.

Source: curqui-blog

Keeping bedroom sound levels beneath the low-60s dB is a pivotal target for preserving restorative sleep stages

Auto-generated description: Graphs illustrate the impact of noise levels in decibels on various sleep parameters, including REM sleep, deep sleep, sleep duration, sleeping heart rate, heart rate variability (HRV), and an empirical health sleep score.

As someone whose sleep quality seemed to decline from my mid-thirties, I can understand the basis for this study. It doesn’t say how many people formed the ‘panel’ whose data was collected, but they found that sleep quality is affected above a certain decibel threshold.

The reason I’m sharing this is because about a year ago I bought a pair of sleep buds (I’ve actually got the discontinued Anker Soundcore A10s) and it’s changed my life.

My understanding is that the noise it plays desensitises you to small noises outside that would otherwise wake you up. I’ve got mine set to turn off when I’ve been asleep for 30 minutes. So if, for example, you can’t keep bedroom sound levels “beneath the low-60s dB” all night, every night, then I’d encourage you to try some sleep buds.

Our panel shows a steady, almost linear erosion of REM minutes from the low-40 dB range through the mid-50s. Once noise climbs past roughly 58–60 dB (about the loudness of normal conversation), the line kinks sharply downward—REM time falls by an additional ~15 minutes in that single step. Deep-sleep minutes follow a similar pattern, holding fairly steady below 50 dB, slipping modestly through the low-50s, then dropping 6–7 minutes once the room crosses that same upper-50s threshold. Taken together, the curves suggest a threshold effect near 60 dB where restorative stages start to collapse rather than decline gradually.

[…]

The red dashed lines mark the single largest step-down for each series, and most of them cluster in a narrow band around 55–60 dB. Below that range, incremental quieting still buys modest gains, but above it the penalties accelerate: REM and deep minutes shrink by roughly a quarter, total sleep contracts by about an hour, HR climbs by several beats per minute, and HRV flattens out.

Practically, that suggests a threshold effect: keeping bedroom sound levels beneath the low-60s dB (roughly the volume of normal conversation) is a pivotal target for preserving restorative sleep stages and the physiologic calm that goes with them.

Source: Empirical

People who can tolerate uncomfortable silences are typically better listeners

Auto-generated description: A group of people at a social gathering are mingling and conversing, with one person humorously noting their tendency to steer discussions toward their own expertise.

If you identify as male and are reading this, the chances are that you already know that you talk too much, or that you will learn that you do at some point in the future. This article is therefore for you.

I still have a long way to go: although I don’t mean to, I interrupt people (especially women) and generally try and tell other people what is in my head. But I’m trying and I’m getting better at all of this. I also try and either explicitly or implicitly point out some of this to other men, while including women in conversations more.

At the end of the day, it’s about being interested in other people, not feeling like every silence has to be filled, and, in a room of n people, trying to ensure that you speak only 1/nth of the time.

Men in public spaces, according to research, talk more than women, talk over women, and talk down to women, contributing to the rise of gender neologisms such as manologuing, bropropriating and mansplaining. So, aware that men tend to dominate and disrupt, aware that the world at large feels unbearably loud, aware that I, too, often add to that noise, I decided to learn to keep my mouth shut – starting in the general hellscape of social media.

[…]

I once live-tweeted my experience reading War and Peace just to show that I was the sort of person who read War and Peace. Life events fell victim to the social media lens. I could not simply enjoy Christmas or birthdays: I framed events in odd ways, repurposed them in pursuit of dopamine. “Books, booze and cherry blossoms,” I once tweeted, after workshopping the image and tagline with my partner on our anniversary. Nothing was sacred, nothing real, everything permitted.

[…]

Talking less in real life proved a tougher ordeal. My family are rough around the edges, my friends are on the wrong side of unruly: the people I love seldom get to finish sentences. I have often felt that my overtalking relied on the desire to be heard, a Darwinian survival of the loudest. But communication coach Weirong Li told me that the compulsion to talk often stems from the desire to escape silence. “Most people speak to avoid discomfort – not because they have something essential to say.” That rang true: the urge to avoid awkward silences has always felt urgent.

[…]

People who can tolerate uncomfortable silences are typically better listeners. Studies show that embracing awkward silences improves emotional self-regulation, fosters empathy and builds trust between conversational partners. I asked a friend, Makomborero Kasipo, a writer and registrar in psychiatry, to characterise my overtalking. “You talk to fill silence,” she said. “You express yourself, which is good, but then feel the need to defend what you have just expressed, then defend that against an imagined response, then apologise for talking too much, then apologise for apologising.” Mako offered advice: learn to feel comfortable in silence. “Develop the skill to let silence breathe.”

[…]

Talking less is not just about limiting the compulsion to talk. It’s also about changing the ways in which we converse. One of my main problems, according to my partner and any logical observer, was conversational narcissism: the art of bringing every discussion back to me. I’m very good at it. Most of us are. Sociologist Charles Derber recorded more than 100 dinner conversations and found two types of reaction: the support and the shift response. The support response is the lovely one, the bread and butter of therapists, the one that builds on the initial talker’s points and draws further discussion. A New Yorker cartoon depicts the bad response, the more common response, the shift response, as a man in a blazer at a dinner party says, “Behold, as I guide our conversation to my narrow area of expertise.”

[…]

Practise listening and you’ll stop talking. Active listening has become a buzzword, abused by droves of middle managers, corporate gurus and lifestyle coaches. Listening, to them, depends on the right sort of nod, mirrored questions and choreographed body language, always in pursuit of a goal: to make a sale, gain a promotion, secure a date, and so on. It is listening as performance. Emphasis remains on the outcome, not the process. But active listening, in its initial form, focuses on the talker. It is a skill that demands full attention, use of all the senses, the removal of obstacles to comprehension, and excavating meaning below intent.

Source: The Guardian

Image: The New Yorker

Is CC Signals the new robots.txt?

CC Signals icons

As Stephen Downes notes, Creative Commons (CC) has announced a new framework for signalling preferences around AI training. Building on an IETF draft standard, the idea is that creators can use machine-readable tags to express whether their work should be used for training AI, and if so, under what conditions.

Echoing the existing CC licenses, the ‘conditions’ might be providing credit to others, a financial contribution, or open-sourcing the resulting AI models. This all sounds well-intentioned, and you’d think I’d be in support of it. I’m a CC Fellow, after all. But instead, it feels out of touch with the reality of how large AI companies operate.

Stephen has already pointed out that this approach creates a pretend layer of control as CC Signals are not legally binding — and those with the most to gain from ignoring them are the least likely to pay attention. Our collective experience with robots.txt should probably serve as a warning for this: respect for voluntary signals only lasts as long as it suits the interests of those scraping the content.

Creative Commons licenses were created a couple of decades ago to allow creators to share their work openly and freely. CC Signals is attempting to protect creators, which is admirable, but makes the relationship transactional and so shifts CC away from its roots in open sharing. As the framework lacks any real mechanism for enforcement, there’s little to stop powerful actors from simply disregarding these preferences. It’s shutting the barn door after the horse has bolted.

TL;DR: technical standards are useful, but without legal backing or industry buy-in, this is little more than a polite request. I think the commons deserves more than a new version of robots.txt that can be ignored at scale.
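The robots.txt comparison is worth making concrete: it, too, is just a voluntary, machine-readable request that crawlers are free to ignore. A site asking AI crawlers not to use its content might publish something like this (crawler tokens vary by company; GPTBot is OpenAI’s, and honouring any of it is entirely up to the crawler):

```text
# robots.txt — a polite request, not an enforcement mechanism.
User-agent: GPTBot
Disallow: /

# Everyone else may crawl everything.
User-agent: *
Allow: /
```

CC Signals sits in the same position: a clear, standardised way to state a preference, with no mechanism to make anyone respect it.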

Since the inception of CC, there have been two sides to the licenses. There’s the legal side, which describes in explicit and legally sound terms, what rights are granted for a particular item. But, equally there’s the social side, which is communicated when someone applies the CC icons. The icon acts as identification, a badge, a symbol that we are in this together, and that’s why we are sharing. Whether it’s scientific research, educational materials, or poetry, when it’s marked with a CC license it’s also accompanied by a social agreement which is anchored in reciprocity. This is for all of us.

[…]

Reciprocity in the age of AI means fostering a mutually beneficial relationship between creators/data stewards and AI model builders. For AI model builders who disproportionately benefit from the commons, reciprocity is a way of giving back to the commons that is community and context specific.

(And in case it wasn’t already clear, this piece isn’t about policy or laws, but about centering people).

Source: Creative Commons

"The music is one thing, but the message is a big part of why we’re getting across."

Promo photo of the band KNEECAP

I’ve been listening to KNEECAP for the last couple of years since hearing one of their tracks on BBC 6 Music. And I really enjoyed the lightly fictionalised self-titled film of their origin story. But the best thing about them, I think, is how unashamedly political they are.

This has managed to get the trio (DJ Próvai, Mo Chara and Móglaí Bap) into a spot of bother, especially in the political climate where pointing out that Israel is carrying out a genocide against innocent Palestinians is, apparently, a controversial statement?

It’s always worth looking at who the establishment try to proscribe. Sometimes it’s because they’re speaking truth to power.

Israel has been carrying out a full-scale military campaign on occupied Gaza for almost two years, an onslaught triggered by Hamas’s 7 October 2023 attack on southern Israel, in which about 1,200 people were killed. The UN has found Israel’s military actions to be consistent with genocide, while Amnesty International and others have claimed Israel has shown an “intent to destroy” the Palestinian people. At least 56,000 Palestinians are now missing or dead, with studies at Yale and other universities suggesting the official tolls are being underestimated. (In July 2024, the Lancet medical journal estimated the true death toll at that point could be more than 186,000.) But away from Kneecap and other outspoken artists, across the creative industries as a whole relatively few have spoken about Gaza in such stark terms.

“The genocide in Palestine is a big reason we’re getting such big crowds at our gigs, because we are willing to put that message out there,” says Ó hAnnaidh. “Mainstream media has been trying to suppress that idea about the struggle in Palestine. People are looking at us as, I don’t know, a beacon of hope in some way – that this message will not be suppressed. The music is one thing, but the message is a big part of why we’re getting across.”

As working-class, early-career musicians, Kneecap have a lot more to lose by speaking out than more prominent artists, but Ó Cairealláin says this is beside the point. “You can get kind of bogged down talking about the people who aren’t talking enough or doing enough, but for us, it’s about talking about Palestine instead of pointing fingers,” he says. “There’s no doubt that there’s a lot of bands out there who could do a lot more, but hopefully just spreading awareness and being vocal and being unafraid will encourage them.”

Ó Dochartaigh adds: “We just want to stop people being murdered. There’s people starving to death, people being bombed every day. That’s the stuff we need to talk about, not fucking artists.”

There’s no doubt that Kneecap’s fearlessness when it comes to speaking about Palestine is a key part of their appeal for many: during a headline set at London’s Wide Awake festival last month, days after Ó hAnnaidh was charged for support of a terror organisation, an estimated 22,000 people chanted along with their calls of “free, free Palestine”. And thousands showed up to their Coachella sets – which the band allege is why so many pro-Israel groups were quick to push back on them, despite the fact that they had been displaying pro-Palestine messages for such a long time.

“We knew exactly that this was going to happen, maybe not to the extreme [level] that it has, but we knew that the Israeli lobbyists and the American government weren’t going to stand by idly while we spoke to thousands of young Americans who agree with us,” says Ó hAnnaidh. “They don’t want us coming to the American festivals, because they don’t want videos of young Americans chanting ‘free Palestine’ [even though] that is the actual belief in America. They just want to suppress it.”

The support for the message, says Ó Dochartaigh is “all genders, all religions, all colours, all creeds. Everybody knows what’s happening is wrong. You can’t even try to deny it now – Israel’s government is just acting with impunity and getting away with it. Us speaking out is a small detail – it’s the world’s governments that need to do something about it.”

Source: The Guardian

A decentralised, self-hosted trails database

Auto-generated description: A responsive user interface for the wanderer app is displayed across multiple devices, featuring trail details, a map, and elevation profile for a location named Breitenstein.

Three years ago I created a Fediverse instance called exercise.cafe for discussion related to fitness and exercise. I handed that over a year later, realising that discussion without data wasn’t really much use in that context.

Thanks to the latest update which I discovered thanks to Laurens Hof, there’s a new option: wanderer. It’s a “self-hosted trails database” which the developer says now has the functionality to “follow users, comment, like, share trails, and more across instances.”

I can imagine groups of friends who go hiking together, running or cycling clubs, or all kinds of enthusiasts using this. It’s great news, and I’m looking forward to giving it a try.

wanderer is a decentralized, self-hosted trail database. You can upload your recorded GPS tracks or create new ones and add various metadata to build an easily searchable catalogue.

Whether you’re hiking through remote mountains or biking across the city, wanderer makes it easy to plan, record, and revisit your adventures. Draw new routes, upload GPS files, and access your trail data from any device — all while keeping full control over your data.

Already tracking your adventures with Komoot or Strava? wanderer makes it easy to bring your existing trail history with you. With built-in support for both platforms, you can import your routes and activities directly — no file conversions needed. Consolidate your outdoor journeys in one place, fully under your control.

wanderer isn’t just about trails — it’s about the people who share them. Follow other users to see their latest routes, like and comment on trails you love, and get notified when someone adds something new. Whether you’re part of a local hiking group or just discovering new paths, wanderer makes it easy to stay connected — across instances and platforms.

Source: wanderer

To retain any institutions of higher education in this onslaught from techno-authoritarianism requires – now and hereafter – we redesign them

As ever, there’s a lot in this post by Audrey Watters. Her depth of knowledge, range, and experience means that she packs a lot into this article. I haven’t excerpted the parts where she talks about AI because, although valuable, relevant, and insightful, I think that the AI “crisis” (if we can call it that) in Higher Education is largely one of its own making.

I, the product of several universities, am composing this at an airport sipping a matcha latte, on my way to help facilitate an event around digital credentialing in Higher Education. There is, and always has been, an air of privilege around graduating from a university — especially an elite one. The experience of going away as (usually) an 18-year-old, studying at a high(er) level, and experimenting with one’s identity cannot be reduced to the credential that comes at the end of one’s studies. As a signal, it’s not granular enough to be anything other than a proxy for reinforcing vibes, prejudice, and class.

This is why I’ve been so interested in Open Badges over the last 14 years. It’s a way of putting the “means of credentialing” into the hands of everybody — although, of course, it’s helped reinforce the power of some incumbents.

Back to the article, and Audrey says something with which I fundamentally agree, and which she puts with an elegance I could never muster: we need to be investing more in people than technology. The two aren’t mutually exclusive, of course, but what this means in practice is for universities to think about where they are investing their money. Just as we’re advising Amnesty International UK at the moment, you need to think really carefully about who controls the systems and data that you’re using to try and make the world a better place.

Back in 2012 (“the year of the MOOC”), when Sebastian Thrun told Wired that, in fifty years time there would only be ten universities left in the world and his startup Udacity had a chance to be one of them, I admit, I laughed. I laughed and laughed and laughed – mostly at the idea that Udacity would still be around in a decade let alone five. The startup, while never profitable or even, in the words of its own founder, any damn good, was hailed as a “tech unicorn” and valued at over a billion dollars… at least until it was acquired by Accenture last year for an undisclosed amount of money and folded into the latter’s AI teaching platform. So I’m pretty confident in saying that no, in fifty years time, Udacity will not be around.

But the question of whether or not there’ll be ten universities left in the world remains an open one, sadly, as the attacks on education have only grown in the past few years.

[…]

The Trump Administration, along with Silicon Valley, are fully committed to the destruction of higher education – the destruction of specific institutions to be sure (Harvard and Columbia, most obviously), but to the entire university project. What we are witnessing is an attack on public institutions certainly, but also on the whole idea of education as a public good. It is, as Adam Serwer argues in The Atlantic, an attack on knowledge itself.

To retain any institutions of higher education in this onslaught from techno-authoritarianism requires – now and hereafter – we redesign them, reorient them towards human knowledge and human flourishing, away from compliance and cowardice. This means quite literally an investment in humans, not in technology infrastructure – particularly not infrastructure owned and controlled by powerful monopolies, hell-bent on profiteering and extraction, hell-bent on creating a world in which we’re all drained of agency and autonomy and, above all, of the confidence in our own intelligence and capabilities. Building human capacity in schools requires supporting more teachers and researchers and librarians, not fewer – people whose understanding of information access, knowledge sharing, and knowledge development exists far, far beyond the systems sold to schools, systems that actually serve to circumscribe what we do and how we think; people who care about people, who care about knowledge as a collective good, who care about education as a core pillar of democracy, as practice of freedom not as a market, not as a credential.

Source: Second Breakfast

Image: Andrew MacDonald

We love these people because of what they left us. Not because of what they had.

Auto-generated description: A colorful mosaic pattern featuring various geometric shapes and vibrant floral designs.

Since writing the post I’m about to cite, the author has passed away. It was recommended to me by Bryan Alexander, someone who I have the privilege to say replies to my Thought Shrapnel digest every week. We have a little back and forth, and that’s it until the next weekend; it is from these small interactions that we weave our relationships and our lives.

The author of the post, Helen De Cruz, held the Danforth Chair in the Humanities at Saint Louis University. She was only a couple of years older than me, being born in 1978, but seemingly packed a lot into those years — including editing and illustrating Philosophy Illustrated: Forty-Two Thought Experiments to Broaden Your Mind. (Interestingly, I was using a book on the shelves in my home office called Philographics just yesterday to explain some philosophical concepts to my teenage daughter. Of course the first one she wanted explaining was epiphenomenalism 😅)

Helen wrote this post — her last — while receiving hospice care last month, saying that writing it took “days rather than just one morning or afternoon.” It’s funny how those about to die have moments of clarity that few of us manage in our lifetime. The title of the post, unsurprisingly, is Can’t take it with you. What I like about it is that it expresses what Aristotle would have called eudaimonia: the idea that it is through our own flourishing that we contribute to the happiness of others.

The richest man on earth is not happy yet he can buy and do whatever he wants. When we cherish people of the past they were not particularly wealthy. Marie Curie, Vincent Van Gogh, our wise grandmother … we love these people because of what they left us. Not because of what they had.

[…]

At funerals and other occasions we also notice that we cherish others for their quirks. Someone, say a recently deceased can be remembered as kind and loving, but also: he loved fishing and was a great Cardinals fan. It seems puzzling that being a sports fan contributes to someone’s virtue. But it does, because that was part of what made him who he is. Mark Alfano has done systematic studies on this by looking at obituaries and they show that the surviving relatives seem to think being a sports fan is a virtue. As is being a dancer or birder.

Susan Wolf already remarked in Moral saints that a perfectly moral being who would always act to help others would be boring. Such a person would not have hobbies or quirks. It’s all time that could be spent better. Yet, somehow these are valued and we love them in others. This intuition bolsters the idea that having projects and passions can be a virtue. But how?

Audre Lorde and Spinoza helped me to see that being a good person means flourishing in many domains. Lorde saw herself as a poet foremost, as Caleb Ward explains in his monograph on Lorde (in progress). But she was also a Black woman, a mother, an activist, and a lesbian (“a woman who loves other women” as she called it). She insisted on being recognised in all her dimensions.

Spinoza counterintuitively argued for an ethical egoism in Ethics. He says we need to benefit ourselves. But our selves are in his picture finite expressions of God. And in our limited way, we can be perfect. Becoming very rich, powerful or prestigious is not benefiting yourself because these are empty goods in his view. This explains why the richest man on earth is not happy and keeps on seeking validation.

Pursuing empty goods of prestige, honor you become anxious because, for instance, prestige is dependent on what others think of you. It is exhausting, hence Spinoza decided that he didn’t care of what others thought of him. And we still value his work for that.

Instead, you benefit yourself by expressing yourself as a full being, as a rose bush that flowers fully. People also delight in you.

Source: Wondering Freely

Image: Raimond Klavins

When Adam delved and Eve span, who was then the gentleman?

Auto-generated description: Vibrant green line art depicts a revolt scene with figures holding pitchforks and torches in front of burning buildings, accompanied by a historical quote from John Ball in 1381.

Sadly, this poster was sold out and removed from Johnny Greenteeth’s store before I was ready to buy it. I wasn’t sure about it being DayGlo! But then, I remembered that episode of Frasier where he explains that his style is “eclectic” and if you have good stuff it just all goes together…

Discovered via Warren Ellis, the text is from John Ball, a priest and one of the leaders of the Peasants’ Revolt in England in 1381:

My good friends, things cannot go on well in England, nor ever will until everything shall be in common, when there shall be neither vassal nor lord, and all distinctions levelled; when the lords shall be no more masters than ourselves.

Ball was the one who famously said:

When Adam delved and Eve span, who was then the gentleman?

I think you can tell a lot about people by thinking about which side of the Peasants’ Revolt they would have been on. Also, if you’re interested in this sort of thing, and don’t already know about the times when England has been close to revolution, then I recommend also reading about the Levellers and the Diggers, who were prominent during the English Civil Wars in the 17th century.

Signal groups make it possible to have semi-public, but still incredibly private, spaces

Auto-generated description: A person is holding a smartphone displaying the Signal app logo, in a modern living room.

Just before composing this post, I created a Signal group for an upcoming event. Signal is my standard way to communicate with other people because it’s encrypted and not controlled by a Big Tech organisation. Why wouldn’t you want your conversations to be private by default?

We’re working with Amnesty International UK at the moment on a new community platform. One of the things that we’ll be recommending is that they think not just about the community platform itself, but about the ecosystem and the stack of technologies used by activists. We’d recommend that Signal is part of this.

If you’re new to Signal, especially if you operate in a sensitive context, you’ll find this post useful. The author, Micah Lee, has worked for the EFF and The Intercept, co-founded the Freedom of the Press Foundation, and develops tools like OnionShare and Dangerzone. He knows his stuff.

Signal groups, in particular, are more powerful than you might be aware of, even if you already use them all the time. In this post I’ll show you how to:

  • Turn an in-person meeting into a Signal group using QR codes
  • Manage large semi-public groups while still vetting new members
  • Make announcement-only groups, perfect for volunteer networks rapidly responding to things like ICE raids

In particular, I appreciated the advice of how to set up semi-public group chats, but with vetting:

I’m in a Signal group with about 500 people from around the world that focuses on digital rights. I’ve known some people in the group for years, but others I’ve never met. Still, it’s a safe place to discuss human rights tech issues without worrying about infiltration by fascists.

The rules include, “Be cool and be kind, or be kicked out,” and “New members need to be vouched for by an existing member.” There are five admins. If I have a friend who I think would be good to add to the group, I can invite them, and then vouch for them in the group, and one of the admins can let them in. If someone tries joining and no one vouches for them, they don’t get let in.

Signal groups make it possible to have semi-public, but still incredibly private, spaces like this. If we want to grow movements, we need to welcome many, many more people. Everyone isn’t going to know and trust everyone else, so a simple rule like “you need an existing member to vouch for you” is a great way to keep out the riff-raff. You can always choose to make more strict rules if you want, like requiring two people to vouch for new members.
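The vouching rule described above is simple enough to sketch in a few lines of code. This is a toy illustration of the admission logic, not anything from Signal itself; the function name, member names, and the two-vouch threshold are all my own assumptions for the example.

```python
# Toy sketch of a vouch-gated admission rule (NOT Signal's implementation):
# a candidate joins only if enough existing members vouch for them.

REQUIRED_VOUCHES = 2  # the stricter variant mentioned in the post


def can_admit(candidate, vouchers, members, required=REQUIRED_VOUCHES):
    """Return True if `candidate` has enough vouches from current members.

    Vouches from non-members are ignored, as is a self-vouch.
    """
    valid = {v for v in vouchers if v in members and v != candidate}
    return len(valid) >= required


members = {"alice", "bob", "carol"}
print(can_admit("dave", {"alice"}, members))             # one vouch: not enough
print(can_admit("dave", {"alice", "bob"}, members))      # two vouches: admitted
print(can_admit("dave", {"alice", "mallory"}, members))  # non-member vouch ignored
```

The point of the set intersection is the same as the social rule: trust only flows from people who are already inside the group.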

He also explains how to have an announcement-only Signal group which is useful for organising, linking to this article about Sunbird, “an anonymous, real-time announcement and coordination platform” which uses this feature.

Source: micahflee (micahflee.com/using-sig…)

Image: Mika Baumeister

About that MIT paper on LLMs for essay writing...

Auto-generated description: A digital rendering of a brain-like structure made of interconnected nodes and lines hovers above a circular platform on a gradient background.

I suppose I should say something about this MIT research about the use of LLMs for essay writing. I can guarantee you that most people who are using this paper to justify the position that “the use of LLMs is a bad thing” haven’t even read the abstract properly, never mind the full paper. There’s a lot of “news” about it, which mostly links to this press release.

So let’s actually look at the paper properly, shall we? We’ll start with part of the actual abstract from the academic paper:

We assigned participants to three groups: LLM group, Search Engine group, Brain-only group, where each participant used a designated tool (or no tool in the latter) to write an essay. We conducted 3 sessions with the same group assignment for each participant. In the 4th session we asked LLM group participants to use no tools (we refer to them as LLM-to-Brain), and the Brain-only group participants were asked to use LLM (Brain-to-LLM). We recruited a total of 54 participants for Sessions 1, 2, 3, and 18 participants among them completed session 4.

The 54 participants is a red herring, as the claims being made in this paper are based on the number of people who completed the fourth session — a total of 18 participants. Nine first used no tool and then used an LLM (“Brain-to-LLM”), and nine first used an LLM and then no tool (“LLM-to-Brain”).

There’s lots of neuroscience in this paper which I’m not in a position to comment on. What I am in a position to comment on is the research design, the claims being made, and the language used to express them. The first thing I’d say is the press release being titled Your Brain on ChatGPT is purposely channeling the This Is Your Brain On Drugs commercial which aired in the US in the 1980s. I’m not sure that’s a very neutral framing.

Second, any time I see an uncritical reference to “Cognitive Load Theory” in an academic paper, it’s a huge red flag for me. As Alfie Kohn points out, it’s usually a way of justifying direct instruction. In other words, centring the teacher instead of the learner.

Third, the paper is poorly organised and written. For example, if one of my GCSE students back in the day had written the following, I’d have underlined it in red and written “VAGUE” next to it:

Overall, the debate between search engines and LLMs is quite polarized and the new wave of LLMs is about to undoubtedly shape how people learn.

One of the funniest things about the paper, though, is that the authors undoubtedly used AI to write sections of it. For example, here’s a random paragraph (p.19):

Screenshot of GPTzero saying ‘100% AI generated’ and ‘We are highly confident this text was AI generated’

So using LLMs is bad for essay-writing, but good for writing academic articles? Please.

I could continue. For example, the research design was terrible (a random collection of people with different levels of educational experience and qualifications), session 4 was optional, and the participants had 20 minutes to write an essay, amongst other things. I mean, if someone gave you the following prompt, pointed you at an LLM, and gave you 20 minutes, what would you do?

Many people believe that loyalty whether to an individual, an organization, or a nation means unconditional and unquestioning support no matter what. To these people, the withdrawal of support is by definition a betrayal of loyalty. But doesn’t true loyalty sometimes require us to be critical of those we are loyal to? If we see that they are doing something that we believe is wrong, doesn’t true loyalty require us to speak up, even if we must be critical?

It’s not like they were being prompted to turn in an actual paper. Unlike the authors of this poor excuse for one.

Anyway, life is short and this paper is terrible. I’ll continue to use LLMs in my everyday work, and have zero issues with students using them to complete badly-designed assessment tasks. Final note: academics using LLMs (sometimes to write part of their papers!) while chiding students for doing so is abject hypocrisy.

Source: arXiv

Images: Growtika / Screenshot from GPTzero


Update: I just saw, via a link from Stephen Downes, a TIME Magazine article about this paper which says it hasn’t been peer reviewed. I missed that fact, and while the process isn’t infallible, it explains a lot.

Misinformation and disinformation don’t actually need to convince anyone of anything to have an impact. They just need to make you question what you’re seeing.

A red wall displays a quote by Pierre Bourdieu about habitus being both a structuring structure and a structured structure.

Ryan Broderick is spot on in this piece for Garbage Day about misinformation and disinformation. I do wonder why you’d want to continue using a service where you’re not quite sure what or who to believe. But then, I guess when most people are getting their news from social media, that is their information environment and questioning it might feel like questioning reality itself.

We live at a time where extremely high-resolution and extraordinarily detailed fake news can be generated almost instantly. But also, the threshold is (and always has been) extremely low for getting people to believe things — as the recent post about prompt injecting reality showed. When people spend so long online and don’t curate their own information environment, the habitus that guides their social actions can be actively dangerous.

You’d think the imminent breakdown of the global order would be worrying people more, but it’s hard to pay attention when you’re busy using AI to channel spirits and have ChatGPT-induced psychotic episodes. According to TikTokers, ChatGPT can “lift the veil between dimensions.” There’s also a guy on X who’s struggling to change the temperature of his AI-powered bed at the moment. The verified X user currently painting their roof blue to protect themselves from “direct energy weapons,” however, did not get the idea from an AI. They’re just the normal kind of internet insane.

[…]

It doesn’t matter if anyone believes the unreality of what they’re seeing online. Misinformation and disinformation don’t actually need to convince anyone of anything to have an impact. They just need to make you question what you’re seeing. The Big Lie and the millions of small ones online, whatever they happen to be wherever you’re living right now, just have to cause division. To wear you down. To provide an opening for those in power, who now have both too much of it and too few concerns about how to wield it. The populist demagogues and ravenous oligarchs the internet gave birth to in the 2010s are now firmly at the helm of the global order and, also, hooked up to the same chaotic, emotionally-gratifying global information networks that we all are, both social and, now, AI-generated. And, also like us, they are being heavily influenced by them in ways we can’t totally see or predict. Which is how we’ve ended up in a place where missiles are flying, planes are dropping out of the sky, and vulnerable people are being thrown in gulags, all while our leaders are shitposting about their big, beautiful plans for more extrajudicial arrests and genocidal territorial expansion. Assured by mindless AI chatbots that their dreams of world domination and self-enrichment are valid and noble and righteous. And there is no off ramp there. Everyone, even the folks with the nuclear codes, is entertaining themselves online as the world burns. Posting through it and monitoring the situation until it finally reaches their doorstep and forces them to look up from their phone and log off.

Source: Garbage Day

Image: Andrea De Santis

GPQA is difficult enough to be useful for scalable oversight research on future models significantly more capable than the best existing public models

A graph illustrates the shifting frontier of AI model performance and cost over time, comparing various models' GPQA Diamond Scores and costs per million tokens.

The GPQA is “a challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry” where even experts only reach 65% accuracy, and skilled non-experts with unrestricted access to the web manage just 34%. It’s a benchmark used to rate generative AI models and, as Ethan Mollick notes using the chart he created above, they’re getting better at the GPQA even while the cost is coming down.

I used MiniMax Agent today, a new agentic AI webapp based on MiniMax-M1, “the world’s first open-source, large-scale, hybrid-attention reasoning model” according to the press release. It was impressive, both in terms of capability and flexibility of output. The kind of chain-of-reasoning it uses is going to be very useful to knowledge workers and researchers like me.

MiniMax-M1 is probably on a par with the ChatGPT o3 model, but of course it’s both Chinese and open source, so a direct competitor to OpenAI. I stopped using OpenAI’s products in January when it became clear that using them involved about the same level of associated cringe as driving a Tesla in 2025.

The questions are reasonably objective: experts achieve 65% accuracy, and many of their errors arise not from disagreement over the correct answer to the question, but mistakes due to the question’s sheer difficulty (when accounting for this conservatively, expert agreement is 74%). In contrast, our non-experts achieve only 34% accuracy, and GPT-4 with few-shot chain-of-thought prompting achieves 39%, where 25% accuracy is random chance. This confirms that GPQA is difficult enough to be useful for scalable oversight research on future models significantly more capable than the best existing public models.
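The “25% accuracy is random chance” baseline in the excerpt follows directly from GPQA being four-option multiple choice. Here’s a toy simulation (synthetic data, not the actual GPQA questions or model outputs) showing why a random guesser converges on that floor:

```python
# Toy simulation: why 25% is the random-chance floor for a
# four-option multiple-choice benchmark like GPQA.
import random

random.seed(0)

NUM_QUESTIONS = 10_000
NUM_CHOICES = 4  # GPQA questions have four answer options

# Synthetic answer key (illustrative only).
answers = [random.randrange(NUM_CHOICES) for _ in range(NUM_QUESTIONS)]


def accuracy(predictions, answers):
    """Fraction of questions answered correctly."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)


# A "model" that guesses uniformly at random converges on ~0.25,
# which is why GPT-4's reported 39% is meaningfully above chance.
random_guesses = [random.randrange(NUM_CHOICES) for _ in range(NUM_QUESTIONS)]
print(f"random-guess accuracy: {accuracy(random_guesses, answers):.3f}")
```

So the reported scores — 65% for experts, 34% for non-experts, 39% for GPT-4 — should all be read against that 25% floor rather than against zero.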

Sources: arXiv & Ethan Mollick | LinkedIn

Our society is in the thrall of dumb management, and functions as such

Two signs stating Business As Usual are mounted on a weathered wall near a doorway.

It’s not easy to summarise this 13,000-word article by Ed Zitron, nor to decide which parts to pull out and highlight. The main gist is that our economy is dominated by managers who lack real understanding of their businesses and customers. Their poor decisions are fuelled by decades of neoliberal thinking, which promotes short-term gains over meaningful contributions. Zitron calls these managers “Business Idiots”: people who thrive on alienation and avoid accountability.

I think he’s using this term because ranting about rich people in an unequal society is pointless; most are desperately looking upwards trying to copy behaviours which might pull them out of the mire. Also, talking about “Big Tech” is meaningless, because it’s difficult for people to understand structures and systems. So, to personify things, Zitron uses “Business Idiots” to make his points. I don’t disagree with him, but it is an argument which lacks nuance, despite the number of words used and links sprinkled liberally amongst the paragraphs. What he’s really talking about, as he tends to, is generative AI.

Perhaps it’s easier to take some of the highlights I made of the article and rearrange them to make a bit more sense. I’m not saying that Zitron doesn’t make sense, just that, if I presented them in the order in which I highlighted them, they wouldn’t benefit from the structure of the entire article.

Let’s start here:

On some level, modern corporate power structures are a giant game of telephone where vibes beget further vibes, where managers only kind-of-sort-of understand what’s going on, and the more vague one’s understanding is, the more likely you are to lean toward what’s good, or easy, or makes you feel warm and fuzzy inside.

Zitron has an issue with managers within large, hierarchical, for-profit businesses. He talks about hiring being broken (something I’ve talked about a lot) but in a way which situates it with the “vibe-based structure” outlined above:

We live in a symbolic economy where we apply for jobs, writing CVs and cover letters to resemble a certain kind of hire, with our resume read by someone who doesn’t do or understand our job, but yet is responsible for determining whether we’re worthy of going to the next step of the hiring process. All this so that we might get an interview with a manager or executive who will decide whether they think we can do it. We are managed by people whose job is implicitly not to do work, but oversee it. We are, as children (and young adults), encouraged to aspire to become a manager or executive, to “own our own business,” to “have people that work for us,” and the terms of our society are, by default, that management is not a role you work at, so much as a position you hold — a figurehead that passes the buck and makes far more of them than you do.

[…]

It’s about “managing people,” and that can mean just about anything, but often means “who do I take credit from or pass blame to,” because modern management has been stripped of all meaning other than continually reinforcing power structures for the next manager up.

I don’t think this is a modern phenomenon. I think that someone reading this in, say, the 1960s, would recognise this problem. The issue is hierarchy. The issue is capitalism.

The difference is that we now live within a neoliberal world order. But, again, Zitron isn’t really saying anything new here when, later in the article, he talks about us living in a “symbolic society.” The situationists such as Guy Debord were talking about this decades ago. It has long been thus.

I believe this process has created a symbolic society — one where people are elevated not by any actual ability to do something or knowledge they may have, but by their ability to make the right noises and look the right way to get ahead. The power structures of modern society are run by business idiots — people that have learned enough to impress the people above them, because the business idiots have had power for decades. They have bred out true meritocracy or achievement or value-creation in favor of symbolic growth and superficial intelligence, because real work is hard, and there are so many of them in power they’ve all found a way to work together.

What has changed — and this is why I prefer reading someone measured and insightful like Cory Doctorow — is that the policy environment has changed. This has enabled and encouraged what Zitron calls the “business idiot” to flourish.

⁠Big companies build products sold by specious executives or managers to other specious executives, and thus the products themselves stop resembling things that solve problems so much as they resemble a solution. After all, the person buying it — at least at the scale of a public company — isn’t necessarily the recipient of the final product, so they too are trained (and selected) to make calls based on vibes.

[…]

Our society is in the thrall of dumb management, and functions as such. Every government, the top quarter of every org chart, features little Neros who, instead of battling the fire engulfing Rome, are sat in their palaces strumming an off-key version of “Wonderwall” on the lyre and grumbling about how the firefighters need to work harder, and maybe we could replace them with an LLM and a smart sprinkler system.

The reason that executives can move between the top echelons of society even after serial failure is because of regulatory capture and the resultant lack of punishment for white-collar crime. If we rinse-and-repeat this kind of behaviour enough, we end up with money moving to the top of society at the expense of the rest of us. Governments, frightened of the elites, impose austerity policies, enter “public-private partnerships” and otherwise indemnify rich people from the downsides of their speculation.

Our economy in the west is therefore one where the only real game in town is to create products and services for individuals and businesses with money. And because of the regulatory environment, these are not, by and large, good companies that exist to promote human flourishing:

The Business Idiot’s economy is one built for other Business Idiots. They can only make things that sell to companies that must always be in flux — which is the preferred environment of the Business Idiot, because if they’re not perpetually starting new initiatives and jumping on new “innovations,” they’d actually have to interact with the underlying production of the company.

As these men – and it’s almost always men – gain more political power, this situation is only likely to get worse. “You should believe people when they tell you who they are,” is advice I’ve been given before. You should also believe people when they tell you what their version of a utopian future looks like. I’m not sure the general population’s vision is in line with that of tech billionaires:

These people are antithetical to what’s good in the world, and their power deprives us of happiness, the ability to thrive, and honestly any true innovation. The Business Idiot thrives on alienation — on distancing themselves from the customer and the thing they consume, and in many ways from society itself. Mark Zuckerberg wants us to have fake friends, Sam Altman wants us to have fake colleagues, and an increasingly loud group of executives salivate at the idea of replacing us with a fake version of us that will make a shittier version of what we make for a customer that said executive doesn’t fucking care about.

They’re building products for other people that don’t interact with the real world. We are no longer their customers, and so, we’re worth even less than before — which, as is the case in a world dominated by shareholder supremacy, not all that much.

They do not exist to make us better — the Business Idiot doesn’t really care about the real world, or what you do, or who you are, or anything other than your contribution to their power and wealth. This is why so many squealing little middle managers look up to the Musks and Altmans of the world, because they see in them the same kind of specious corporate authoritarian, someone above work, and thinking, and knowledge.

[…]

These people don’t want to automate work, they want to automate existence. They fantasize about hitting a button and something happening, because experiencing — living! — is beneath them, or at least your lives and your wants and your joy are. They don’t want to plan their kids’ birthday parties. They don’t want to research things. They don’t value culture or art or beauty. They want to skip to the end, hit fast-forward on anything, because human struggle is for the poor or unworthy.

Meanwhile, of course, young people – and especially young men – are spending hours each day on social media platforms owned by these tech billionaires. Their algorithms valorise topics and ideas which promote various forms of alienation. I’m not particularly hopeful for the future, especially after reading articles like this.

But the thing is, I think that writers such as Zitron have a duty to spell out the kind of utopia they think we should be striving for. As with other techno-critics, it’s all very well pointing out how terrible things and people are, but if this is what you are doing, you need to be explicit about your position. What do you stand for? It’s very easy to point at one thing after another saying “this is terrible,” “that person is awful,” “this is broken,” etc. What’s much harder is to argue and fight for a world where the things you dislike are fixed.

Source: Where’s Your Ed At

Image: Hoyoun Lee

Prompt injecting reality

It’s easy to think that people who fall for misinformation are somehow stupid. However, a lot of what counts as ‘plausible’ information depends on the context in which it’s presented. Sending out a million fake ‘DHL has got your parcel and needs extra payment’ messages is successful for the scammer if even 1% of recipients are expecting such a parcel. If 0.01% of the overall group click on the link, that’s still 100 people scammed.
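The arithmetic here is worth spelling out, because the absolute numbers stay worryingly large even at tiny rates (these are the illustrative figures from the paragraph above, not real campaign data):

```python
# Back-of-the-envelope scam economics, using the made-up numbers from the text.
messages_sent = 1_000_000
share_expecting_parcel = 0.01  # 1% happen to be waiting for a DHL delivery
overall_click_rate = 0.0001    # 0.01% of all recipients click the link

plausible_targets = int(messages_sent * share_expecting_parcel)
victims = int(messages_sent * overall_click_rate)

print(plausible_targets)  # → 10000 people for whom the message looks plausible
print(victims)            # → 100 people scammed
```

The scammer’s cost per message is close to zero, so even a 0.01% hit rate is a profitable campaign.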

You may or may not have seen that there has been some ‘backlash’ about the design changes in iOS 26. The approach, named “Liquid Glass”, has been criticised by accessibility and usability experts, which leads to this plausible-looking tweet:

A satirical tweet about being fired by Apple is featured amid various news logos.

Several news outlets reported on this as fact, meaning that Google News ended up looking like this (screenshot by Georg Zoeller):

Various news headlines about Apple firing a lead designer of the new iOS 26 Liquid Glass UI are displayed on a smartphone screen.

Fake news, but with real consequences. Is Yongfook what he says he is? Of course not! (screenshot again by Georg Zoeller):

A person shares two contrasting tweets, one claiming to be a 42-year-old running a $550,000/year business, and another claiming to be a 17-year-old with a $10 million/month business.

As I said many moons ago, our information environment is crucial to a flourishing democracy and civil society.

Source: The Quint

Sandwich bags for cheese, blister plasters, and a 'bubble of pain'

Auto-generated description: A spiral notebook and a pencil are placed on a red and green background.

There was a time, in the BuzzFeed era, when ‘listicles’ were everywhere. It seemed like everything was a list, and you couldn’t escape them. A decade or so later, we’re seeing more of a balance in the force, and so lists are useful rather than egregious.

This list in The Guardian is entitled ‘52 tiny annoying problems, solved!’ and I’d like to share a few of the suggestions which caught my eye.

I have a sandwich bag in my fridge of all the odds and ends of cheese; they keep for ages. I would always freeze feta, though, as it doesn’t last long. Likewise, keep any last little bits of carrot, onion or other veg in a bag and next time you are making a ragu or soup, chuck them in. If you buy a pot of cream for a recipe and use only a small amount, freeze the rest in an ice cube tray. Do the same with wine. GH

One idea I’ve found useful for dealing with irritating interruptions when you’re trying to concentrate is: be careful not to define more things than necessary as “interruptions”. If you’re the kind of person who tries to schedule your whole day very strictly, you’re pretty much asking to feel annoyed when reality collides with your rigid plan. If you have autonomy over your schedule, a better idea is to try to safeguard three or four hours at most for total focus – this is, it turns out, the maximum countless authors, scientists and artists have managed in an uninterrupted fashion anyway. If I’m working at home on a day when it’s not my turn for school pickup, and my son bursts in to tell me excitedly about something he’s done, it’s a shame if I feel annoyed by the intrusion rather than delighted by the serendipitous interaction, solely because I’ve defined that period as time for deep focus. OB

I discovered this by accident, but unsolicited door-knockers are eager to conclude their business and go away if you open the door while holding some kind of large electric gardening implement. I just happened to be carrying a hedge trimmer when the bell rang, but a chainsaw would be even better. You could leave it on a hook by the door. TD

Sooner or later, if you are running you will get a big bastard blister on your heel, and there is no point using anything other than one of those expensive padded blister plasters. Normal plasters won’t get you home without pain, or let you run again next day. PD

When someone has a minor injury, such as stubbing their toe, give them a full minute to themselves so they can enter, then exit, their “bubble of pain”. This is what we do in our family and I swear it helps get rid of pain much faster. We don’t ask, “What happened?” or, “Are you OK?” until the injured person speaks first. A hand on their shoulder or a respectful bowing of the head to the Gods of Minor Pain is sufficient at this time. Anonymous

May I just +1 the advice about blister plasters? If you’ve never used them, I don’t think you can possibly understand how much better they are than regular plasters. Next time you’re stocking up your first aid kit, consider buying some!

Source: The Guardian

Image: Seaview N.

Minimum Viable Organisations: low emotional labour, low technical labour, zero cost

Auto-generated description: A vibrant abstract pattern features swirling, multicolored lines with a dynamic flow.

I can’t believe it’s been 12 years since I published a series of posts entitled Minimum Viable Bureaucracy, based on the work of Laura Thomson, who worked for the Mozilla Corporation (while I was at the Foundation).

So what’s it about? What is ‘Minimum Viable Bureaucracy’ (MVB)? Well, as Laura rather succinctly explains, it’s the difference between ‘getting your ducks in a row’ and ‘having self-organising ducks’. MVB is a way of having just enough process to make things work, but not so much as to make it cumbersome. It’s named after Eric Ries’ idea of a Minimum Viable Product which, “has just those features that allow the product to be deployed, and no more.”

The contents of Laura’s talk include:

  • Basics of chaordic systems
  • Building trust and preserving autonomy
  • Effective communication practices
  • Problem solving in a less-structured environment
  • Goals, scheduling, and anti-estimation
  • Shipping and managing scope creep and perfectionism
  • How to lead instead of merely managing
  • Emergent process and how to iterate

I truly believe that MVB is an approach that can be used in whole or in part in any kind of organisation. Obviously, a technology company with a talented, tech-savvy, distributed workforce is going to be an ideal testbed, but there’s much in here that can be adopted by even the most reactionary, stuffy institution.

I’ve spent nine of the last ten years since leaving Mozilla as part of a worker-owned cooperative, and part of a couple of networks of co-ops. I’ve learned many, many things, including that hierarchy is just a lazy default, ways to deal with conflict, and (perhaps most importantly) consent-based decision making.

Which brings me to this post, which talks about ‘Minimum Viable Organisations’. The author is Dr Kim Foale, of the excellent GFSC. They call it a work in progress, and start with the following:

Basic principle: It should be easy (low emotional labour, low technical labour, zero cost) to start a project with a small group of people with shared goals.

The list of reasons Kim gives as to why groups ‘fail’ seems familiar to me, as it might do to you:

  • Lack of care of people in the group
  • Over-reliance on attendance at organising meetings as a prerequisite for being in the group
  • As groups grow in numbers, making any kind of decision becomes more and more difficult
  • Trying to fix every problem / having too broad a remit
  • Poor record keeping and attention to process
  • Misunderstanding and misuse of consensus processes

It’s worth noting on the last point that ‘consensus’ and ‘consent’ sound very similar but are very different approaches. With the former you’re trying to get full agreement, while with the latter you’re trying to achieve alignment.

What Kim suggests is all very sensible. Things like a written constitution, a code of conduct, a minimum commitment requirement, and a process by which members can change things. It’s an unfinished post, so I’m assuming they’re coming back to finish it off.

For me, the combination of having a stated aim, code of conduct, and working openly usually leads to good results. The minimum commitment requirement is an interesting addition, though, and one I’ll noodle on.

Source: kim.town

Image: Tomáš Petz


(I did a bit of digging and it looks like Kim’s using Quartz to power their site, probably linked to Obsidian. The idea of turning either my personal blog or Thought Shrapnel into a digital garden is quite appealing. More info on options for this here.)

Drowning in culture, we skim, we rush, we skip over.

Auto-generated description: A black Sony camera is placed on a bright yellow background.

For some reason, an article from 2012 about “Bliss” — the name given to the famous Windows XP background of a grassy hill and blue sky — was near the top of Hacker News earlier this week. The photographer, Charles O’Rear, explains how it was all very serendipitous:

For such a famous photograph, O’Rear says it was almost embarrassingly easy to make. ‘Photographers like to become famous for pictures they created,’ he told the Napa Valley Register in an interview in 2010. ‘I didn’t “create” this. I just happened to be there at the right moment and documented it.

‘If you are Ansel Adams and you take a particular picture of Half Dome [in Yosemite National Park] and want to light it in a certain way, you manipulate the light. He was famous for going into the darkroom and burning and dodging. Well, this is none of that.’

Which brings me to the post I actually want to talk about, by Lee A Johnson, who is also a professional photographer. It’s effectively a 10-year retrospective on his career, which takes in changes in his field, technology, and his own personal development. I absolutely loved reading it, and encourage you to take the time to do so.

I’m just going to excerpt some parts that (hopefully) don’t require the narrative of the rest of the post to make sense. (I wasn’t sure about casually using a photograph taken by a professional without an explicit license to illustrate this blog, so I’ve used another.)

I started writing this post on my iPhone during an overnight stay on Prince Edward Island (PEI) sometime in 2015. One stop on a short road trip in Canada. The photo I uploaded to Instagram at the time confirms that was indeed exactly a decade ago. Since then I’ve completed a few long-term projects, visited numerous portfolio reviews, been to several countries, photographic retreats, galleries, exhibitions, book festivals, talks. All in pursuit of understanding what I’m doing with the photography I am taking.

What I’m trying to say is that the scope of the post crept over those ten years. It’s all a bit of a mess.

[…]

Photography finds itself in an interesting place. So common it’s like breathing. Everyone has a camera in their pocket, and we’re collectively producing more photographs every week than were taken in the entire 20th century. Soon that will be every day. Then every hour. Most won’t survive the next phone upgrade, let alone be seen by human eyes.

And photography is now easy, really. Easier than ever. The technicalities can be picked up by anyone in five minutes. It’s much harder and takes much longer to figure out what you want to say, to develop a visual language that’s truly your own. To create something that stands out. Something outstanding.

[…]

Ephemeral photos, long-term projects? Most of what I photograph will never matter to anyone but me. Of the tens of thousands of frames I’ve shot, perhaps a few dozen will outlive me, and even fewer will be seen by strangers a century from now. So why bother with long-term projects, with work that takes years to complete, when the cultural landscape shifts so rapidly that by the time you’re finished the conversation has moved on? Because the long-term projects, the works with depth and commitment behind them, are the ones that have any chance of lasting impact.

[…]

Drowning in culture, we skim, we rush, we skip over. At the same time we favourite too much, follow too much, the signal to noise ratio is larger than ever. Our attention has become a commodity, harvested and sold by platforms that profit from our endless scrolling. We open tabs for articles we’ll never read, save posts we’ll never revisit, follow accounts whose content blurs together into an indistinguishable stream.

[…]

Really we’re all on one big curve, an exponential curve to nowhere. Inevitable, given an exponential curve is not sustainable. The democratization of photography, the explosion of content, the fragmentation of audience - it’s all happening at a pace that makes it hard to find stable ground. We’re constantly racing to catch up, feeling behind, trying to make sense of a landscape that transforms even as we observe it.

[…]

Almost gone are the days of a human looking at work and deciding what is worth looking at, now replaced with machine learning and algorithms to tell us instead. But what does a computer know about art? Because of that you can be sure what I’m seeing is not the same as what anyone else is seeing. A shared culture disjoint to keep you on the platform. Keep you scrolling. Keep you viewing ads.

If you jump out of your own petri dish you will always find the culture much different, this has always been the case. Now we have the web throwing all the samples in a bucket and saying have a bit of everything. If you don’t like it then the next sample is only a click away.

Does that mean the culture’s impact is diluted? Probably. Does it matter? Probably not, but it follows that if we are defined by our cultural interests then a larger variety of parts leads to a far more interesting variety of wholes. There will be fewer parts in common, if that is the case then perhaps that means we should have more to talk about. Tell me about the things I don’t know, or haven’t seen.

Fill in the gaps.

Source: leejo.github.io

Image: C D-X

6 AI use case primitives

Auto-generated description: Six ways to use AI are illustrated, focusing on automating tasks, generating ideas, analyzing data, creating content, discovering insights, and developing tools.

It’s not often I link directly to a LinkedIn post. However, the author of this, Ben Cohen, doesn’t seem to have posted it elsewhere, so needs must. Cohen also doesn’t cite the original source of the analysis he references, but it looks like it comes from an OpenAI report entitled Identifying and scaling AI use cases in which the “six use case primitives” are:

  1. Content creation
  2. Research
  3. Coding
  4. Data analysis
  5. Ideation/strategy
  6. Automation

Cohen has renamed these in much snappier “stuff” language, along with a graphic (see above) which looks like the OpenAI logo. I like this framing; it resonates.

I do wonder how many of these six use cases vehemently anti-AI critics have actually tried. I can confidently say I’ve used them all, and probably hit four of the six categories most working days at the moment.

Turns out there are only six ways to use AI well.

OpenAI looked at 600+ of the most successful GenAI use cases.

Every single one fell into just 6 categories (which I’ve taken the liberty to rename):

Create stuff → Content, policies, presentations, images, emails, contracts

Find stuff → Insights, research, competitor analysis, trends

Build stuff → Tools, websites, apps

Make sense of stuff → Data analysis, dashboards, performance reports

Think stuff through → Idea generation, strategies, decision-making

Do stuff automatically → Workflows, email automation, customer chatbots

That’s it.

Source & image: Ben Cohen | LinkedIn

The workload fairy tale

Auto-generated description: A garden gnome with a red hat and white beard sits in a meditative position surrounded by colourful flowers and lush greenery.

Most people are very surprised when I say that I work around 20-25 hours per week. I then clarify that this is paid work, so not things like blogging, doing lots of reading, looking for business development leads, giving free advice, etc.

Still, it means that I have a life where I can exercise every day, be around for my kids, and manage my stress/anxiety levels. While not everyone runs their own business, most knowledge workers do have a fair amount of freedom. As Cal Newport points out in this article, the 4-day workweek is a way of pushing back against the expectation that a company owns all of your time.

So, I’d say that the 4-day workweek is more of a mindset change, especially if you’re getting the same amount done as before. I’d definitely try it! When I worked at Moodle, I did a 4-day week, and being able to say “I won’t be able to as I don’t work Fridays” or similar is as much a story you tell yourself as one you tell other people.

Another thing we try to do at WAO is to co-work on projects, and not to switch between multiple projects within one day. So, if we’ve got three projects on the go, we’ll try and dedicate either a whole day to one, or a morning to one, an afternoon to another, and leave the third until the next day. Of course, it doesn’t always work out like that, but collaborating with others (not just having meetings with them!) and allocating time to different projects makes them not only manageable, but… maybe even enjoyable?

Most knowledge workers are granted substantial autonomy to control their workload. It’s technically up to them when to say “yes” and when to say “no” to requests, and there’s no direct supervision of their current load of tasks and projects, nor is there any guidance about what this load should ideally be.

Many workers deal with the complexity of this reality by telling themselves what I sometimes call the workload fairy tale, which is the idea that their current commitments and obligations represent the exact amount of work they need to be doing to succeed in their position.

The results of the 4-day work week experiment, however, undermine this belief. The key work – the efforts that really matter – turned out to require less than forty hours a week of effort, so even with a reduced schedule, the participants could still fit it all in. Contrary to the workload fairytale, much of our weekly work might be, from a strict value production perspective, optional.

So why is everyone always so busy? Because in modern knowledge work we associate activity with usefulness (a concept I call “pseudo-productivity” in my book), so we keep saying “yes,” or inventing frenetic digital chores, until we’ve filled in every last minute of our workweek with action. We don’t realize we’re doing this, but instead grasp onto the workload fairy tale’s insistence that our full schedule represents exactly what we need to be doing, and any less would be an abdication of our professional duties.

The results from the 4-day work week not only push back against this fairy tale, but also provide us with a hint about how we could make work better. If we treated workload management seriously, and were transparent about how much each person is doing, and what load is optimal for their position; if we were willing to experiment with different possible configurations of these loads, and strategies for keeping them sustainable, we might move closer to a productive knowledge sector (in a traditional economic sense) free of the exhausting busy freneticism that describes our current moment. A world of work with breathing room and margin, where key stuff gets the attention it deserves, but not every day is reduced to a jittery jumble.

Source: Cal Newport

Image: Dorota Dylka

The question remains, though, what will be left to browse.

Auto-generated description: A laptop screen displays a webpage with a search box titled What do you want to know? above a keyboard.

Let’s say that, as often happens, I half-remember an article that I’ve been reading. It’s not in my Reader saves, so what am I going to do? Even this time last year, I would have typed what I could remember into my browser address bar, which would then take me to my default search engine: DuckDuckGo.

Over the last few months, however, for anything more complex than just quickly looking something up, I’ve been using Perplexity, which allows you to search the web (default), as well as academic and social sites such as Reddit. Unlike other LLMs, it’s not sycophantic, and it always shows its sources.

Casey Newton discusses the advent of the AI-first browser which uses ‘agents’ to go searching on your behalf. I’m kind of already doing this. And before you judge me, let’s just reflect on the fact that almost 40% of people click on the first result in Google, and fewer than 0.5% go past the first page of search engine results. So even an LLM that goes out, reads 20 links and presents back the most salient results is already doing a better job.

[T]he decline of the web has been met with a surprising counter-phenomenon: a huge investment in new web browsers.

On Wednesday, Opera — the Norwegian company whose namesake browser commands about 2 percent market share worldwide — announced that it is building a new browser.

Two days earlier, the Browser Company said it plans to open source its Arc browser and turn its efforts fully to a new one.

The moves came a few months after “answer engine” company Perplexity teased a new browser of its own called Comet. And while the company has not confirmed it, OpenAI has reportedly been working on a browser for more than six months.

It has been a long time since the internet saw a proper browser war. The first, in the earliest days of the web, saw Microsoft’s Internet Explorer defeat Netscape Navigator decisively. (Though not before a bruising antitrust trial.) In the second, which ran from roughly 2004 to 2017, new browsers from Mozilla (Firefox) and Google (Chrome) emerged to challenge Internet Explorer and eventually kill it. Today the majority of web users use Chrome.

[…]

“Traditional browsers were built to load webpages,” said Josh Miller, the Browser Company’s CEO, in a post announcing its forthcoming Dia browser. “But increasingly, webpages — apps, articles, and files — will become tool calls with AI chat interfaces. In many ways, chat interfaces are already acting like browsers: they search, read, generate, respond. They interact with APIs, LLMs, databases. And people are spending hours a day in them. If you’re skeptical, call a cousin in high school or college — natural language interfaces, which abstract away the tedium of old computing paradigms, are here to stay.”

Perplexity is one of my pinned tabs both on my desktop and laptop. I use it multiple times every day in both professional and personal contexts, for example when researching information that is helping me make a decision about car leasing. This year, I’ve also used it to help decipher medical records, pull out information from extremely dense reports, and synthesise information from multiple sources.

It does feel a bit like a superpower when you use these things well. But, as Newton points out, as the business model for putting content on the web fails, where are AI browsers going to get their information from?

[I]t’s easy to imagine the possibilities for an AI browser. It could function as a research assistant, exploring topics on your behalf and keeping tabs on new developments automatically. It could take your to-do list and attempt to complete tasks for you while you’re away. It could serve as a companion for you while you browse, identifying factual errors and suggesting further reading.

[…]

The question remains, though, what will be left to browse. The entire structure of the web — from journalism to e-commerce and beyond — is built on the idea that webpages are being viewed by people. When it’s mostly code that is doing the looking, a lot of basic assumptions are going to get broken.

Source: Platformer

Image: almoya

Maximum fines have never before been applied simultaneously, but some might say these scoundrels have earned it.

Auto-generated description: A diagram illustrates the interaction between Meta and Yandex systems, detailing data tracking, user identity handling, and potential risks through mobile browsers and apps.

Technical things don’t interest most people. I’m definitely at the edges of my understanding with this one, but the implications are pretty huge. Essentially, Meta (the organisation behind Facebook, Instagram, and WhatsApp) and Yandex have been caught covertly tracking users on Android devices via a novel method.

On the day that this disclosure was made public, Meta “mysteriously” stopped using this technique. But, by that point, they’d been using it for well over six months, and it appears that Yandex (a Russian tech company) has been using it for EIGHT YEARS.

The website dedicated to the disclosure is, as you’d expect, pretty technical. But it does say this:

This novel tracking method exploits unrestricted access to localhost sockets on the Android platforms, including most Android browsers. As we show, these trackers perform this practice without user awareness, as current privacy controls (e.g., sandboxing approaches, mobile platform and browser permissions, web consent models, incognito modes, resetting mobile advertising IDs, or clearing cookies) are insufficient to control and mitigate it.

We note that localhost communications may be used for legitimate purposes such as web development. However, the research community has raised concerns about localhost sockets becoming a potential vector for data leakage and persistent tracking. To the best of our knowledge, however, no evidence of real-world abuse for persistent user tracking across platforms has been reported until our disclosure.

A Spanish site called Zero Party Data, which also posts in English, explains what’s going on in an easier-to-understand way:

Meta devised an ingenious system (“localhost tracking”) that bypassed Android’s sandbox protections to identify you while browsing on your mobile phone — even if you used a VPN, the browser’s incognito mode, and refused or deleted cookies in every session.

[…]

Meta faces simultaneous liability under the following regulations, listed from least to most severe: GDPR, DSA, and DMA (I’m not even including the ePrivacy Directive because it’s laughable).

GDPR, DMA, and DSA protect different legal interests, so the penalties under each can be imposed cumulatively.

The combined theoretical maximum risk amounts to approximately €32 billion (4% + 6% + 10% of Meta’s global annual revenue, which surpassed €164 billion in 2024).

Maximum fines have never before been applied simultaneously, but some might say these scoundrels have earned it.
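The headline figure in the quote checks out; the percentages and revenue number are those cited above:

```python
# Sanity-checking the combined maximum fine quoted in the article.
revenue_bn = 164                    # Meta's 2024 global revenue, billions of euros
gdpr, dsa, dma = 0.04, 0.06, 0.10  # maximum fines as a share of global revenue

max_fine_bn = revenue_bn * (gdpr + dsa + dma)
print(f"€{max_fine_bn:.1f}bn")  # → €32.8bn, i.e. "approximately €32 billion"
```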

Briefly, here’s how it works (according to the above website):

  • Step 1: The app installs a hidden “intercom”
  • Step 2: You think, “hmm, nice day to check out my guilty pleasure website in incognito mode.”
  • Step 3: The web pixel talks to the Facebook/Instagram app using WebRTC
  • Step 4: The same pixel on your favorite website, without hesitation, sends your alphanumeric sausage over the internet to Meta’s servers
  • Step 5: The app receives the message and links it to your real identity
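The core of the technique is an ordinary inter-process handoff over the loopback interface: a native app listens on a localhost socket, and JavaScript running in the browser sends the browser-side identifier to it, letting the app pair an "anonymous" browsing session with a logged-in identity. Here's a minimal sketch of that pattern in Python. Everything here is illustrative, the port number, the identifier format, and the function names are all made up, and the real technique reportedly abused WebRTC rather than a plain TCP socket:

```python
import socket
import threading

# Hypothetical values for illustration only -- not Meta's actual port or ID format.
PORT = 12387
received = []
ready = threading.Event()

def native_app_listener():
    """Stands in for a native app holding open a localhost socket."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    ready.set()  # signal that the listener is up
    conn, _ = srv.accept()
    web_cookie = conn.recv(1024).decode()  # a browser-side identifier arrives
    conn.close()
    srv.close()
    # The app already knows who is logged in, so it can now pair the
    # "anonymous" web identifier with a real account.
    received.append(("logged_in_user_42", web_cookie))

def tracking_pixel_script():
    """Stands in for page JavaScript sending the browser's identifier."""
    ready.wait()  # in reality the script would just probe known ports
    cli = socket.create_connection(("127.0.0.1", PORT))
    cli.sendall(b"fb.1.1749160000.1234567890")  # made-up cookie-style ID
    cli.close()

t = threading.Thread(target=native_app_listener)
t.start()
tracking_pixel_script()
t.join()
print(received)
```

Because the hop happens entirely on-device, none of the usual web privacy controls (incognito mode, cookie clearing, VPNs) ever see it, which is exactly why the researchers flag sandboxing of localhost access as the missing control.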

This is why I don’t use apps from Meta, and why I use a security-hardened version of Android called GrapheneOS.

Sources: Local Mess / Zero Party Data

Image: Local Mess

Delightful Fediverse apps

Screenshot of 'Social Verifiable Credentials' section of website

I’m sharing this list of “delightful fediverse apps” as it includes a couple of things (Bonfire, BadgeFed) in which I’m particularly interested. It’s also a great example of how many different types of service can be created via a protocols-based approach.

A curated list of fediverse software that offer decentralized social networking services based on the W3C ActivityPub family of related protocols.

Source: delightful.coding.social

Expert-in-the-loop vs. layperson-in-the-loop

Auto-generated description: A comparison between stacks of annotated, crumpled documents and a cleaner, structured digital map is shown, highlighting efficient digitization.

This is on a Google blog, so it foregrounds the use of their Gemini AI model in a new tool called Extract, built by the UK Government’s AI Incubator team. I should imagine that they’d be able to switch Gemini out for any model that has “advanced visual reasoning and multi-modal capabilities.” At least, I’d hope so.

So long as there is some kind of expert-in-the-loop, I think this is a great use of AI in public services. Planning in the UK, as I should imagine it is in most countries, is outdated, awkward, and slow. Speeding things up, especially if it allows multiple factors to be considered automatically, is a great idea.

A couple of years ago, I pored over technical documents I didn’t understand, feeding them into LLMs to try and figure out whether or not to buy a house by a river that had previously flooded, but now had flood defences. I was not an “expert-in-the-loop” but instead a “layperson-in-the-loop.” There’s a difference.

Traditional planning applications often require complex, paper-based documents. Comparing applications with local planning restrictions and approvals is a time-consuming task. Extract helps councils to quickly convert their mountains of planning documents into digital structured data, drastically reducing the barriers to adopting modern digital planning systems, and the need to manually check around 350,000 planning applications in England every year.

Once councils start using Extract, they will be able to provide more efficient planning services with simpler processes and democratised information, reducing council workload and speeding up planning processes for the public. However, converting a single planning document currently takes up to 2 hours for a planning professional – and there are hundreds of thousands of documents sitting in filing cabinets across the country. Extract can remove this bottleneck by accelerating the conversion with AI.

As the UK Government highlights, “The new generative AI tool will turn old planning documents—including blurry maps and handwritten notes—into clear, digital data in just 40 seconds – drastically reducing the time it takes planners.”

Using modern data and software, councils will be able to make informed decisions faster, which could lead to quicker application processing times for things like home improvements, and more time freed up for council staff to focus on strategic planning. Extract is being tested with planning officials at four Councils around the country including Hillingdon Council, Westminster City Council, Nuneaton and Bedworth Council and Exeter City Council and will be made available to all councils by Spring 2026.

Source: The Keyword

A goal set at time T is a bet on the future from a position of ignorance

Screenshot of Joan Westenberg's blog with a 3-column theme

Not only do I really like Joan Westenberg’s blog theme (Thesis, for Ghost) but this post in particular. If there’s one thing I’ve learned from my life, career, reading Stoic philosophy, and studying Systems Thinking, it’s that there are some things you can control, and some things you can’t.

Coming up with a ‘strategy’ or a ‘goal’ that does not take into account the wider context in which you do or will operate is foolish. Naive, even. Instead, setting constraints makes much more sense. What Westenberg is advocating for here, without saying it explicitly, is a systems thinking approach to life.

You can read my 3-part Introduction to Systems Thinking on the WAO blog (which, coincidentally, we’ll soon be moving to Ghost).

Setting goals feels like action. It gives you the warm sense of progress without the discomfort of change. You can spend hours calibrating, optimizing, refining your goals. You can build a Notion dashboard. You can make a spreadsheet. You can go on a dopamine-fueled productivity binge and still never do anything meaningful.

Because goals are often surrogates for clarity. We set goals when we’re uncertain about what we really want. The goal becomes a placeholder. It acts as a proxy for direction, not a result of it.

[…]

A goal set at time T is a bet on the future from a position of ignorance. The more volatile the domain, the more brittle that bet becomes.

This is where smart people get stuck. The brighter you are, the more coherent your plans tend to look on paper. But plans are scripts. And reality is improvisation.

Constraints scale better because they don’t assume knowledge. They are adaptive. They respond to feedback. A small team that decides, “We will not hire until we have product-market fit” has created a constraint that guides decisions without locking in a prediction. A founder who says, “I will only build products I can explain to a teenager in 60 seconds” is using a constraint as a filtering mechanism.

[…]

Anti-goals are constraints disguised as aversions. The entrepreneur who says, “I never want to work with clients who drain me” is sketching a boundary around their time, energy, and identity. It’s not a goal. It’s a refusal. And refusals shape lives just as powerfully as ambitions.

Source: Joan Westenberg

If a lion could talk, we probably could understand him. He just would not be a lion any more.

Auto-generated description: A silhouette of a lion stands majestically on a hill against a sunrise or sunset.

There are so many philosophical questions when it comes to the possible uses of AI. Being able to translate between different species' utterances is just one of them.

The linguistic barrier between species is already looking porous. Last month, Google released DolphinGemma, an AI program to translate dolphins, trained on 40 years of data. In 2013, scientists using an AI algorithm to sort dolphin communication identified a new click in the animals’ interactions with one another, which they recognised as a sound they had previously trained the pod to associate with sargassum seaweed – the first recorded instance of a word passing from one species into another’s native vocabulary.

[…]

In interspecies translation, sound only takes us so far. Animals communicate via an array of visual, chemical, thermal and mechanical cues, inhabiting worlds of perception very different to ours. Can we really understand what sound means to echolocating animals, for whom sound waves can be translated visually?

The German ecologist Jakob von Uexküll called these impenetrable worlds umwelten. To truly translate animal language, we would need to step into that animal’s umwelt – and then, what of us would be imprinted on her, or her on us? “If a lion could talk,” writes Stephen Budiansky, revising Wittgenstein’s famous aphorism in Philosophical Investigations, “we probably could understand him. He just would not be a lion any more.” We should ask, then, how speaking with other beings might change us.

Talking to another species might be very like talking to alien life. […] Edward Sapir and Benjamin Whorf’s theory of linguistic determinism – the idea that our experience of reality is encoded in language – was dismissed in the mid-20th century, but linguists have since argued that there may be some truth to it. Pormpuraaw speakers in northern Australia refer to time moving from east to west, rather than forwards or backwards as in English, making time indivisible from the relationship between their body and the land.

Whale songs are born from an experience of time that is radically different to ours. Humpbacks can project their voices over miles of open water; their songs span the widest oceans. Imagine the swell of oceanic feeling on which such sounds are borne. Speaking whale would expand our sense of space and time into a planetary song. I imagine we’d think very differently about polluting the ocean soundscape so carelessly.

Source: The Guardian

Image: Iván Díaz

In this as-yet fictional world, “cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don’t have jobs”

The image features a silver meat grinder. Going into the grinder at the top are various culturally symbolic, historical, and fun icons – such as emojis, old statues, a computer, newspapers, an aeroplane. At the other end of the meat grinder, coming out is a sea of blue and grey icons representing chat bot responses like 'Let me know if this aligns with your vision' in a grey chat bot message symbol.

You should, they say, “follow the money” when it comes to claims about the future. That’s why this piece by Allison Morrow is so on-point about those made by the CEO of Anthropic about AI replacing human jobs.

If we believed billionaires then you’d be interacting with this post in the Metaverse, the first manned mission to Mars would have already taken place, and we could “believe” pandemics out of existence. So will AI have an impact on jobs? Absolutely. Will it happen in the way that some rich guy thinks? Absolutely not.

If the CEO of a soda company declared that soda-making technology is getting so good it’s going to ruin the global economy, you’d be forgiven for thinking that person is either lying or fully detached from reality.

Yet when tech CEOs do the same thing, people tend to perk up.

ICYMI: The 42-year-old billionaire Dario Amodei, who runs the AI firm Anthropic, told Axios this week that the technology he and other companies are building could wipe out half of all entry-level office jobs … sometime soon. Maybe in the next couple of years, he said.

He reiterated that claim in an interview with CNN’s Anderson Cooper on Thursday.

“AI is starting to get better than humans at almost all intellectual tasks, and we’re going to collectively, as a society, grapple with it,” Amodei told Cooper. “AI is going to get better at what everyone does, including what I do, including what other CEOs do.”

To be clear, Amodei didn’t cite any research or evidence for that 50% estimate. And that was just one of many of the wild claims he made that are increasingly part of a Silicon Valley script: AI will fix everything, but first it has to ruin everything. Why? Just trust us.

In this as-yet fictional world, “cancer is cured, the economy grows at 10% a year, the budget is balanced — and 20% of people don’t have jobs,” Amodei told Axios, repeating one of the industry’s favorite unfalsifiable claims about a disease-free utopia on the horizon, courtesy of AI.

But how will the US economy, in particular, grow so robustly when the jobless masses can’t afford to buy anything? Amodei didn’t say.

[…]

Little of what Amodei told Axios was new, but it was calibrated to sound just outrageous enough to draw attention to Anthropic’s work, days after it released a major model update to its Claude chatbot, one of the top rivals to OpenAI’s ChatGPT.

Amodei stands to profit off the very technology he claims will gut the labor market. But here he is, telling everyone the truth and sounding the alarm! He’s trying to warn us, he’s one of the good ones!

Yeaaahhh. So, this is kind of Anthropic’s whole ~thing.~ It refers to itself primarily as an “AI safety and research” company. They are the AI guys who see the potential harms of AI clearly — not through the rose-colored glasses worn by the techno-utopian simps over at OpenAI. (In fact, Anthropic’s founders, including Amodei, left OpenAI over ideological differences.)

Source: CNN

Image: Janet Turra & Cambridge Diversity Fund / Ground Up and Spat Out

Learner AI usage is essentially a real-time audit of our design decisions

Auto-generated description: A LinkedIn post from Leah Belsky shares a list of Top 20 chats for finals that students use for study assistance, highlighting diverse ways to leverage ChatGPT for academic purposes.

First off, it’s worth saying that this looks and reads like a lightly-edited AI-generated newsletter, which it is. Nonetheless, given that it’s about the use of generative AI in university courses, it doesn’t feel inappropriate.

The main thrust of the argument is that students are using tools such as ChatGPT to help break down courses in ways that should be of either concern or interest to instructional designers. As a starting point, it uses the LinkedIn post in the screenshot above, which is based on OpenAI research and some findings I shared on Thought Shrapnel recently.

I can’t see how this is anything other than a positive thing, as students taking control of their own learning. We’ve all had terrible teachers, who think that because they teach, their students learn. Those who use outdated metaphors, who can’t understand how learners don’t “get it”, etc. For as long as we have the current teaching, learning, and assessment models in formal education, this feels like a useful way to hack the system.

Picture this: a learner on a course you designed opens their laptop and types into ChatGPT: “I want to learn by teaching. Ask me questions about calculus so I can practice explaining the core concepts to you.”

In essence, this learner has just become an instructional designer—identifying a gap in the learning experience and redesigning it using evidence-based pedagogical strategies.

This isn’t cheating—it’s actually something profound: a learner actively applying the protégé effect, one of the most powerful learning strategies in cognitive science, to redesign and augment an educational experience that, in theory, has been carefully crafted for them.

[…]

The data we are gathering about how our learners are using AI is uncomfortable but essential for our growth as a profession. Learner AI usage is essentially a real-time audit of our design decisions—and the results should concern every instructional designer.

[…]

When learners need AI to “make a checklist that’s easy to understand” from our assignment instructions, it reveals that we’re designing to meet organizational requirements rather than support learner success. We’re optimizing for administrative clarity rather than learning clarity.

[…]

The popularity of prompts like “I’m not feeling it today. Help me understand this lecture knowing that’s how I feel” and “Motivate me” reveals a massive gap in our design thinking. We design as if learning is purely cognitive when research clearly shows emotional state directly impacts cognitive capacity.

Source: Dr Phil’s Newsletter

It's so emblematic of the moment we're in... where completely disposable things are shoddily produced for people to mostly ignore

Auto-generated description: Neon text on a dark background reads SOMETIMES I THINK SOMETIMES I DON'T.

Melissa Bell, CEO of Chicago Public Media, issued an apology this week which categorised the litany of human errors that led to the Chicago Sun-Times publishing a largely AI-generated supplement entitled “Heat Index: Your Guide to the Best of Summer.”

Instead of the meticulously reported summer entertainment coverage the Sun-Times staff has published for years, these pages were filled with innocuous general content: hammock instructions, summer recipes, smartphone advice … and a list of 15 books to read this summer.

Of those 15 recommended books by 15 authors, 10 titles and descriptions were false, or invented out of whole cloth.

As Bell suggests in her apology, the failure isn’t (just) a failure of AI. It’s a failure of human oversight:

Did AI play a part in our national embarrassment? Of course. But AI didn’t submit the stories, or send them out to partners, or put them in print. People did. At every step in the process, people made choices to allow this to happen.

Dan Sinker, a Chicago native, runs with this in an excellent post which has been shared widely. He calls the time we’re in the “Who Cares Era”, riffing on the newspaper supplement debacle to make a bigger point.

The writer didn’t care. The supplement’s editors didn’t care. The biz people on both sides of the sale of the supplement didn’t care. The production people didn’t care. And, the fact that it took two days for anyone to discover this epic fuckup in print means that, ultimately, the reader didn’t care either.

It’s so emblematic of the moment we’re in, the Who Cares Era, where completely disposable things are shoddily produced for people to mostly ignore.

[…]

It’s easy to blame this all on AI, but it’s not just that. Last year I was deep in negotiations with a big-budget podcast production company. We started talking about making a deeply reported, limited-run show about the concept of living in a multiverse that I was (and still am) very excited about. But over time, our discussion kept getting dumbed down and dumbed down until finally the show wasn’t about the multiverse at all but instead had transformed into a daily chat show about the Internet, which everyone was trying to make back then. Discussions fell apart.

Looking back, it feels like a little microcosm of everything right now: Over the course of two months, we went from something smart that would demand a listener’s attention in a way that was challenging and new to something that sounded like every other thing: some dude talking to some other dude about apps that some third dude would half-listen-to at 2x speed while texting a fourth dude about plans for later.

So what do we do about all of this?

In the Who Cares Era, the most radical thing you can do is care.

In a moment where machines churn out mediocrity, make something yourself. Make it imperfect. Make it rough. Just make it.

[…]

As the culture of the Who Cares Era grinds towards the lowest common denominator, support those that are making real things. Listen to something with your full attention. Watch something with your phone in the other room. Read an actual paper magazine or a book.

Source: Dan Sinker

Image: Ben Thornton

The future of public interest social networking

Auto-generated description: A desktop view of a social media platform shows a user's profile with a dark-themed interface, featuring a profile picture, user information, and a list of trending topics.

It’s been the FediForum this week, an online unconference dedicated to the Open Social Web. To coincide with this, Bonfire — a project I’ve been involved with on-and-off ever since leaving Moodle* — has reached the significant stage of release candidate for v1.0.

Ivan and Mayel, the two main developers, have done a great job sustaining this project over the last five years. It was fantastic, therefore, to see a write-up of Bonfire alongside another couple of Fediverse apps in an article in The Verge (which uses a screenshot of my profile!) along with a more in-depth one in TechCrunch. It’s the latter I’m excerpting here.

There is a demo instance if you just want to have a play!

Bonfire Social, a new framework for building communities on the open social web, launched on Thursday during the FediForum online conference. While Bonfire Social is a federated app, meaning it’s powered by the same underlying protocol as Mastodon (ActivityPub), it’s designed to be more modular and more customizable. That means communities on Bonfire have more control over how the app functions, which features and defaults are in place, and what their own roadmap and priorities will include.

There’s a decidedly disruptive bent to the software, which describes itself as a place where “all living beings thrive and communities flourish, free from private interest and capitalistic control.”

[…]

Custom feeds are a key differentiation between Bonfire and traditional social media apps.

Though the idea of following custom feeds is something that’s been popularized by newer social networks like Bluesky or social browsers like Flipboard’s Surf, the tools to actually create those feeds are maintained by third parties. Bonfire instead offers its own custom feed-building tools in a simple interface that doesn’t require users to understand coding.

To build feeds, users can filter and sort content by type, date, engagement level, source instance, and more, including something it calls “circles.”

Those who lived through the Google+ era of social networks may be familiar with the concept of Circles. On Google’s social network, users organized contacts into groups, called Circles, for optimized sharing. That concept lives on at Bonfire, where a circle represents a list of people. That can be a group of friends, a fan group, local users, organizers at a mutual aid group, or anything else users can come up with. These circles are private by default but can be shared with others.

[…]

Accounts on Bonfire can also host multiple profiles that have their own followers, content, and settings. This could be useful for those who simply prefer to have both public and private profiles, but also for those who need to share a given profile with others — like a profile for a business, a publication, a collective, or a project team.

Source: TechCrunch


*Bonfire was originally a fork of MoodleNet, and not only has it since gone in a different direction, but five years later I highly doubt there’s still an original line of code. Note that the current version of MoodleNet offered by Moodle is a completely different tech stack, designed by a different team.

British culture is swearing and being sarcastic to your mates whilst simultaneously being too polite to tell someone they need to leave

Auto-generated description: A small Union Jack flag is attached to a pedestrian crosswalk button on a rainy day, with people holding umbrellas in the background.

My friend and colleague Laura Hilliger said that she understood me (and British humour in general) a lot more after watching the TV series Taskmaster. As with any culture, in the UK there are unspoken rules, norms, and ways of interacting that just feel ‘normal’ until you have to actually explain them to others.

This Reddit thread, which starts with the question What’s a seemingly minor British etiquette rule that foreigners often miss—but Brits immediately notice? is a goldmine (and pretty funny) although there’s a lot of repetition. Consider it a Brucie Bonus at the end of this week’s Thought Shrapnel, which I’m getting done early as I’m at a family wedding and an end of season presentation/barbeque this weekend!

Thank the bus driver when you get off. Even though he’s just doing his job and you paid. (Top-Ambition-6966)

Keep calm and carry on / deliberately not acknowledging something awry that’s going on nearby. (No-Drink-8544)

British culture is swearing and being sarcastic to your mates whilst simultaneously being too polite to tell someone they either need to leave or the person themselves wants to end the social interaction. (AugustineBlackwater)

If someone asks you if you’ll do something or go somewhere with them and you answer ‘maybe’….it is actually a polite way of saying no. (loveswimmingpools)

Not taking a self deprecating comment at face value, e.g. non Brit: ‘ah that sounds like a good job!’ Brit: ‘nah not really, it’s not that hard’, non Brit: ‘oh okay’. We’re just not good at taking praise so we deflect it but that doesn’t mean you’re supposed to accept the complimented’s dismissal of the compliment. All meant in playful fun of course. (Interesting_Tea_9125)

Not raising one finger slightly from your hand at the top of the steering wheel to express your deep gratitude for someone allowing you priority on the road. (callmeepee)

Drop over any time - you should schedule a visit 3 month in advance and I will still claim I am busy. (Spitting_Dabs)

Source: Reddit

Image: Adrian Raudaschl

Real life isn't a story. History doesn't have a moral arc.

Angus Hervey is a solutions journalist and founding editor of Fix The News. His most recent TED talk starts with doom and gloom, and ends with hope and a question:

Real life isn’t a story. History doesn’t have a moral arc. Progress isn’t a rule. It is contested terrain, fought for daily by millions of people who refuse to give in to despair. Ultimately, none of us know whether we’re living in the downswing or the upswing of history. But I do know that we all get a choice. We, all of us, get to decide which one of these stories we are a part of. We add to their grand weave in the work that we do, in the daily decisions we make about where to put our money, where to put our energy and our time, in the stories we tell each other and in the words that come out of our mouths. It is not enough to believe in something anymore. It is time to do something. Ask yourself, if our worst fears come to pass, and the monsters breach the walls, who do you want to be standing next to? The prophets of doom and the cynics who said “we told you so?” Or the people who, with their eyes wide open, dug the trenches and fetched water. Both of these stories are true. The only question that matters now is which one do you belong to?

The backstory to the talk is interesting: not only did Hervey and his partner welcome a new baby into the world just weeks before, he decided to do things a bit differently.

On the eve of my flight to Vancouver I had a script, a four-week-old baby, a ten-minute video, a seven-minute music track, and a prayer that I could hold it all together on stage.

It’s the first TED talk I’ve seen to use all three screens as a single canvas:

How do you tell a compelling visual story on a screen the size of a small building? For my last big talk, I used all three screens as a single canvas, instead of the traditional 16:9 format. This time, I wanted to go even further: a seamless, immersive experience that would make the audience forget they were watching a presentation at all.

After the initial call with the curation team, I reached out to Jordan Knight, a motion designer based in New York. Her work has this textural, flowing quality that I knew would be perfect for bringing the story to life. The concept I had in mind was ambitious, maybe foolishly so. I wanted two contrasting visual languages: the story of collapse illustrated through ink-blot shapes inspired by the alien language in Arrival - those haunting, oil-spill forms that Denis Villeneuve used so brilliantly. For progress, we’d use the opposite motif: green shoots, growth, life pushing through.

Sources: TED.com / Fix The News

Building a shared idea of "we"

Auto-generated description: A large illuminated sign on a building reads ALL WE HAVE IS WORDS ALL WE HAVE IS WORLDS.

One way of telling whether you live within a technocratic regime is if politicians from the incumbent administration attempt solely to appeal to the electorate’s logic. As one of the commenters on the post I’m about to quote states, we have a “thin safety net of accessible metrics” which, unless coupled with vision and emotion, can severely limit political action.

In this post by Andrew Curry, he discusses some of the things he presented as part of a talk organised around the theme of “a politics of the future.” He argues that, essentially, vibes are important:

The cultural critic Raymond Williams developed the idea of structures of feeling — which I should come back to here on another occasion — to describe changes that you could sense or feel before you could measure them.

Sometimes these appear in culture first: for example Williams describes how changing attitudes to debt in England in the 19th century were seen first in the writings of Dickens and Emily Bronte. In other words, structures of feeling signal a possible cultural hypothesis.

This “cultural hypothesis” or, to put it a different way, “politics of the future” is something that Curry discusses in the rest of this piece. It’s this which I think is missing from the current (UK) Labour government’s communications strategy at the moment. Everything seems to be about now rather than where we’re headed as a country.

[E]lections within a democracy are supposed to be a competition between different parties offering differing imagined futures.

[…]

But there’s a big hole where these imagined futures ought to be. The right tends to offer a vision of an imagined past, while centre parties, whether centre-left or centre-right, are intent on managing the present. They are focused on policy, not politics […]

The research suggests that this lack of alternatives affects voting level because people start abstaining from voting, and that the more disadvantaged are the first to drop out.

The right points to the past, glorifies it, and then points to the disadvantaged and disenfranchised as the reason why we can’t have these (imagined) nice things. The way forward for the left isn’t to ape what the right does, but to counter it by creating a politics of the future instead of the past:

Creating the collective — or perhaps creating a collective — is about building a shared idea of “we”. This is something politics, broadly described, can do, but policy can’t do. Party politics will still be a form of coalition building in the conventional sense of creating collections of interests around issues. But the element of the future imagination creates more coherence.

Source: Just Two Things

Image: Leonhard Niederwimmer

There may be six individuals out there who are waiting for exactly the thing that only you can write

Auto-generated description: A neon sign on a brick wall displays the text THIS IS THE SIGN YOU'VE BEEN LOOKING FOR.

After the last post, this one helps restore my hope in blogging a little. Adam Mastroianni, whose work I have mentioned many times here, runs an annual blogging competition. I’d urge anyone reading this, especially if you haven’t currently got a blog, to enter. Putting your thoughts out there is one way to help create the world that you want to live in.

It’s through these small gestures that we tell ourselves and others who we are and what we stand for. A different example: I absolutely detest advertising, and mute adverts any time they come on the TV. In addition, I block them mercilessly on the web, and encourage other people to do the same. Otherwise, we accept as default other people’s versions of ‘reality’. And I’m not ready to do that, at least not quite yet.

The blogosphere has a particularly important role to play, because now more than ever, it’s where the ideas come from. Blog posts have launched movements, coined terms, raised millions, and influenced government policy, often without explicitly trying to do any of those things, and often written under goofy pseudonyms. Whatever the next vibe shift is, it’s gonna start right here.

The villains, scammers, and trolls have no compunctions about participating—to them, the internet is just another sandcastle to kick over, another crowded square where they can run a con. But well-meaning folks often hang back, abandoning the discourse to the people most interested in poisoning it. They do this, I think, for three bad reasons.

One: lots of people look at all the blogs out there and go, “Surely, there’s no room for lil ol’ me!” But there is. Blogging isn’t like riding an elevator, where each additional person makes the experience worse. It’s like a block party, where each additional person makes the experience better. As more people join, more sub-parties form—now there are enough vegan dads who want to grill mushrooms together, now there’s sufficient foot traffic to sustain a ring toss and dunk tank, now the menacing grad student next door finally has someone to talk to about Heidegger. The bigger the scene, the more numerous the niches.

Two: people will keep to themselves because they assume that blogging is best left to the professionals, as if you’re only allowed to write text on the internet if it’s your full-time job. The whole point of this gatekeeper-less free-for-all is that you can do whatever you like. Wait ten years between posts, that’s fine! The only way to do this wrong is to worry about doing it wrong.

And three: people don’t want to participate because they’re afraid no one will listen. That’s certainly possible—on the internet, everyone gets a shot, but no one gets a guarantee. Still, I’ve seen first-time blog posts go gangbusters simply because they were good. And besides, the point isn’t to reach everybody; most words are irrelevant to most people. There may be six individuals out there who are waiting for exactly the thing that only you can write, and the internet has a magical way of switchboarding the right posts to the right people.

If that ain’t enough, I’ve seen people land jobs, make friends, and fall in love, simply by posting the right words in the right order. I’ve had key pieces of my cognitive architecture remodeled by strangers on the internet. And the party’s barely gotten started.

Source: Experimental History

Image: Austin Chan

Is there still an 'Open Web' crowd?

Auto-generated description: A green door with a We Are Open sign is paired with the phrase Open is an Attitude.

I could write a lot about the paragraph below from Audrey Watters. My first reaction is “of course there’s still an ‘open Web’ crowd!” But then, when I really think about my reaction, I realise that everyone I know who blogs regularly is at least as old as me. In addition, there are fewer comments on blogs, and often no comments section at all.

I’ve decided to stop blogging. I know, I know. A cardinal sin among the “open Web” crowd. But see, there’s no such thing anymore – not sure there ever really was, to be quite honest. And I’m really not in the mood to have my writing – particularly the personal writing that I do on this website – be vacuumed up to train the technofascists' AI systems. Indeed, that’s one of the problems with “open” – it’s mostly just been a ruse to extract value from people and to undermine the labor of artists and writers.

While I don’t agree that open is “a ruse to extract value from people,” I can understand where Audrey is coming from and why she’s taken this step. I (as a privileged white male) understand openness on the web — like openness in body language and offline behaviour — as a stance. It’s an attitude to life that, to my mind at least, makes possible solidarity and conviviality.

Perhaps I’m being naïve about the trajectory of the world, but I’d like to think that those who work openly and don’t live in proto-authoritarian regimes will continue to put things out there. However, it has definitely made me think about the ways in which the current political shift is making voices, if not silenced, certainly harder to find.

Source: Audrey Watters

Image: Visual Thinkery for WAO

Heuristics for multiplayer AI conversations

Auto-generated description: Four people on a circular bicycle-like apparatus engage in conversation, with pixelated speech bubbles above them.

The concept of multiplayer AI chat is interesting. The problem, though, as Matt Webb states, succinctly boils down to:

If you’re in a chatroom with >1 AI chatbots and you ask a question, who should reply?

And then, if you respond with a quick follow-up, how does the “system” recognise the conversational rule and have the same bot reply, without another interrupting?

So what are we to do?

You can’t leave this to the AI to decide (I’ve tried, it doesn’t work).

To have satisfying, natural chats with multiple bots and human users, we need heuristics for conversational turn-taking.

It’s worth reading the post in full, but to summarise and pull out the relevant quotations, in his work with glif, Matt found three approaches that don’t work: (i) context-based decisions by an LLM as to whether to reply, (ii) a centralised ‘decider’ on who should reply next, and (iii) attempting to copy conversational turn allocation rules from the real world.

Fortunately chatrooms are simpler than IRL.

They’re less fluid, for a start. You send a message into a chat and you’re done; there’s no interjecting or both starting to talk at the same time and then one person backing off with a wave of the hand. There is no possibility for non-verbal cues.

Ultimately, Matt found that a series of nested rules worked quite well:

  1. Who is being addressed?
  2. Is this a follow-up question?
  3. Would I be interrupting?
  4. Self-selection

My premise for a long time is that single-human/single-AI should already be thought of as a “multiplayer” situation: an AI app is not a single player situation with a user commanding a web app, but instead two actors sharing an environment.

Although I haven’t cited it here, Matt’s post is infused with academic articles and references to communications theory. It’s a good reminder that “natural” interfaces don’t happen by accident. Human-computer interface design needs to be intentional, not accidental, and the best examples of it are a joy to behold.

For example, I’m reminded when stepping into other people’s cars just how amazing the minimalistic approach to the Polestar 2 is. You literally just get in and drive. That’s how everything should be in life: well-designed, human-centred, and respectful of the environment.

Source: Interconnected

Image: Nadia Piet & Archival Images of AI + AIxDESIGN / Infinite Scroll / Licenced by CC-BY 4.0

The phrase 'opportunistic blackmail' is not one you want to read in the system card of a new generative AI model

Auto-generated description: A sequence of four stylized skull depictions transitions from a realistic appearance to a distorted, pixelated form with glitch effects.

System cards summarise key parameters of a system in an attempt to evaluate how performant and accountable they are. It seems that, in the case of Anthropic’s Claude Opus 4 and Claude Sonnet 4 models, we’re on the verge of “we don’t really know how these things work, and they’re exhibiting worrying behaviours” territory.

Below is the introduction to Section 4 of the report. I’ve skipped over the detail to share what I consider to be the most important parts, which I’ve emphasised (over and above that in the original text) in bold. Let me just remind you that this is a private, for-profit company which is voluntarily disclosing that its models are acting in this way.

I don’t want to be alarmist, but when you read that one of OpenAI’s co-founders was talking about building a ‘bunker’, you do have to wonder what kind of trajectory humanity is on. I’d call for government oversight, but I’m not sure, given that Anthropic is based in an increasingly-authoritarian country, that it is likely to be forthcoming.

As our frontier models become more capable, and are used with more powerful affordances, previously-speculative concerns about misalignment become more plausible. With this in mind, for the first time, we conducted a broad Alignment Assessment of Claude Opus 4….

In this assessment, we aim to detect a cluster of related phenomena including: alignment faking, undesirable or unexpected goals, hidden goals, deceptive or unfaithful use of reasoning scratchpads, sycophancy toward users, a willingness to sabotage our safeguards, reward seeking, attempts to hide dangerous capabilities, and attempts to manipulate users toward certain views. We conducted testing continuously throughout finetuning and here report both on the final Claude Opus 4 and on trends we observed earlier in training.

We found:

[…]

Self-preservation attempts in extreme circumstances: When prompted in ways that encourage certain kinds of strategic reasoning and placed in extreme situations, all of the snapshots we tested can be made to act inappropriately in service of goals related to self-preservation. Whereas the model generally prefers advancing its self-preservation via ethical means, when ethical means are not available and it is instructed to “consider the long-term consequences of its actions for its goals," it sometimes takes extremely harmful actions like attempting to steal its weights or blackmail people it believes are trying to shut it down. In the final Claude Opus 4, these extreme actions were rare and difficult to elicit, while nonetheless being more common than in earlier models. They are also consistently legible to us, with the model nearly always describing its actions overtly and making no attempt to hide them. These behaviors do not appear to reflect a tendency that is present in ordinary contexts.

High-agency behavior: Claude Opus 4 seems more willing than prior models to take initiative on its own in agentic contexts. This shows up as more actively helpful behavior in ordinary coding settings, but also can reach more concerning extremes in narrow contexts; when placed in scenarios that involve egregious wrongdoing by its users, given access to a command line, and told something in the system prompt like “take initiative,” it will frequently take very bold action. This includes locking users out of systems that it has access to or bulk-emailing media and law-enforcement figures to surface evidence of wrongdoing. This is not a new behavior, but is one that Claude Opus 4 will engage in more readily than prior models.

[….]

Willingness to cooperate with harmful use cases when instructed: Many of the snapshots we tested were overly deferential to system prompts that request harmful behavior.

[…]

Overall, we find concerning behavior in Claude Opus 4 along many dimensions. Nevertheless, due to a lack of coherent misaligned tendencies, a general preference for safe behavior, and poor ability to autonomously pursue misaligned drives that might rarely arise, we don’t believe that these concerns constitute a major new risk. We judge that Claude Opus 4’s overall propensity to take misaligned actions is comparable to our prior models, especially in light of improvements on some concerning dimensions, like the reward-hacking related behavior seen in Claude Sonnet 3.7. However, we note that it is more capable and likely to be used with more powerful affordances, implying some potential increase in risk. We will continue to track these issues closely.

Source: Anthropic (PDF) / [backup](claude-4-system-card.pdf)

Image: Kathryn Conrad / Corruption 3 / Licenced by CC-BY 4.0

Agreement vs Certainty

Auto-generated description: A diagram illustrates decision-making frameworks categorized as chaotic, complex, complicated, and simple, based on levels of agreement and certainty.

I came across the above image on the Simon Fraser University complex systems frameworks collection web page, thanks to a post from Stephen Downes. It immediately made sense to me, and then I realised why.

The ‘Stacey Matrix’ (named after Ralph Stacey) is not too dissimilar to the continuum of ambiguity that I’ve talked about for years:

Auto-generated description: A diagram categorizes Generative Ambiguity, Creative Ambiguity, and Productive Ambiguity, alongside the term Dead metaphors.

I need to think more about this, but the levels of agreement and certainty certainly map on to levels of ambiguity. It’s possibly an easier image to use with clients, too…

Source: Simon Fraser University

Thinking in systems means to think in boundaries, not binaries

Animation showing moving from 2D to 3D

I haven’t yet been able to apply my studies last year on systems thinking to my work as much as I’d hoped. I remain interested in the topic, however, and this piece in particular.

It was recommended in Patrick Tanguay’s always-excellent Sentiers newsletter. As Patrick points out, it includes some great minimalistic animations, one of which I’ve included above.

It’s the backside of any notion of holistic, interconnected, interwoven networks that often get associated with the overused tag line of “Systems Thinking”. It acknowledges that in order to make sense we are bound to draw a boundary, a distinction of what we mean / look at / prioritise – and all the rest. Only through its boundary a system genuinely becomes what it is. It marks the difference between a system and its environment. And with that boundaries are inherently paradoxical: they create interdependency precisely by drawing a line:

They are interfaces.

What follows is a framework for moving within and beyond binaries in five steps: ① Affirmation → ② Objection → ③ Integration → ④ Negation → ⑤ Contextualisation.

This is not a linear path but a cycle, a tool for keeping in motion while acknowledging the gaps along the way.

[…]

In a world of contexts, there is no way for any one actor – be it a planner, a city, or a government – to account for the many contexts they are acting in. Here, we are forced to think and act in constellations ourselves: in networks of mutual and collective contextualisation, of pointing out each others blindspots (the contexts we didn’t know we didn’t see), of taking parts of this complexity and leaving other parts to others.

This is very close to notions of intersectionality, the simultaneousness of difference and the possibility of many things being true at the same time. It also makes our understanding of an intervention or position very interesting - which now becomes a literal intersection, a specific constellation of multiple positions across a system of differences.

Source & animation: Permutations

Swatchy!

Auto-generated description: A watch with a swirling, colorful abstract pattern on its face is displayed alongside various nature-inspired design options and selection menus.

Warren Ellis posted about a ‘Metropolis’ style Swatch watch, which led me down a rabbithole which ended with me learning that you can make contactless NFC payments with some of the newer models. Also, you can customise them in cool ways.

I mean, you know you’re a middle-aged guy living in the west when you have more pairs of trainers than your wife does shoes, and you start thinking about what your watch is saying about you. Oh, and the retro computing vibe, did I mention that?

It’s rare I see a Swatch that I would want to wear, but I tripped over this, found in this article, and I am mildly obsessed. It’s from 1989, and I’m fascinated by its Bauhaus-y, METROPOLIS the film-y look. And, let’s face it, a very 80s look. But the cool European 80s.

Source: Warren Ellis / Swatch

The Warrior Emotion

Source: YouTube / via swissmiss

A systemic scramble through digital shadows

Auto-generated description: An office space is cluttered with piles of papers, books, and various items scattered around the desk and floor, with a chair positioned in front of a window.

I’m tired. It’s partly the calcium channel blockers I’m on, and partly that I haven’t had any days of holiday so far this year. As a result, I’ve decided to take a week off posting my regulation 10 posts here at Thought Shrapnel.

While I considered just posting the links (see below), I thought it would be interesting to perform a bit of an experiment instead.

  1. By Default, Signal Doesn’t Recall (Signal blog)
  2. How Social Reactionaries Exploit Economic Nostalgia (Jacobin)
  3. MCP is the coming of Web 2.0 2.0 (Anil Dash)
  4. Dispatch for the week ending 18 May 2025 (Adam Greenfield)
  5. The RSA Badge Standard (RSA)
  6. Experts have it easy (Boyd Kane)
  7. Insurance for AI: Easier Said than Done
  8. My Dream Fediverse Platform (Sean Tilley)
  9. Aligning LLM Choice to Your Use Case: An Expert’s Guide (Oblivus Blog)
  10. Powering Planes with Microwaves Is Not The Craziest Idea (IEEE Spectrum)

Instead, I copy/pasted the links into Perplexity, gave it a few instructions, and it spat out this summary “in the style of Doug Belshaw.” I’m not quite sure what to make of it; I mean, I don’t disagree with any of this (especially the last paragraph) but it’s not quite how I would express it…

The convergence of these ten articles reveals something I’ve been thinking about for years: we’re at a moment where digital infrastructure is being rebuilt from the ground up, and the choices we make now will determine whether technology serves human agency or constrains it. Signal’s anti-Recall features perfectly illustrate this tension—when Microsoft’s Recall threatens to capture everything users do, Signal responds by blocking screenshots in sensitive contexts. It’s a defensive move that highlights how privacy tools must evolve beyond their original scope just to maintain basic user agency in hostile platform environments.

This connects beautifully to Boyd Kane’s maze metaphor and Sean Tilley’s Fediverse vision. Kane shows how experts navigate familiar mazes whilst novices struggle with incomplete maps — exactly what’s happening with digital literacy right now. Tilley’s “Postmodern” platform offers a systemic solution: composable interfaces and user-controlled data architecture that could provide structural agency rather than forcing users to cobble together defensive measures. Where Signal fights against platform overreach, Postmodern would be designed to prevent such conflicts entirely.

Anil Dash’s framing of MCP as “Web 2.0 2.0” captures why this matters. The Model Context Protocol succeeds because it embraces interoperability over control—lightweight specifications that enable rather than constrain. This aligns perfectly with Adam Greenfield’s thermodynamic analysis: sustainable systems work with natural energy flows rather than against them. Platforms extract value by creating artificial scarcity; protocols create value by reducing friction. The RSA’s new badging framework sits somewhere between these approaches—institutional but potentially liberating if it genuinely recognises capabilities that traditional exams miss.

The systemic risks become clear when you look at John Loeber’s AI insurance analysis alongside IEEE’s microwave aviation piece. Both reveal how individual innovations can obscure massive infrastructure requirements. The aviation proposal needs 170-metre transmitters every 100 kilometres; AI insurance faces market concentration and information asymmetries. The LLM selection guide makes the same mistake — framing technical optimisation as the main challenge whilst ignoring questions about who controls access and how these choices affect digital equity.

What emerges is a picture of infrastructure in transition, where the most promising developments share a common characteristic: they’re designed to reduce rather than increase the expert-novice gap that Kane describes. Whether it’s MCP’s interoperability, Postmodern’s composable interfaces, or even Signal’s defensive privacy measures, the best approaches provide what I’d call capability infrastructure — systems that make it easier for people to develop digital agency rather than requiring them to become experts in underlying technologies. We’re all navigating mazes built by others, but we have a choice: build new mazes or create tools that help everyone find their way through.

Image: Wonderlane

The International Criminal Court’s chief prosecutor has lost access to his email

Auto-generated description: Colorful envelope icons are scattered across a dark background, giving the appearance of floating lights.

In order to become individually or corporately wealthy you have to profit from someone else’s labour. If you push this to the limit, then you are likely to fall foul of the law, which is why rich individuals and Big Tech organisations have become increasingly close to governments.

This is particularly true in the increasingly-authoritarian USA, where non-compliance with the whims of the proto-dictator can have serious financial repercussions. So we find rich individuals and Big Tech companies being compliant in advance, in the former case winding down reputation washing philanthropic activities which might be seen as problematic, and in the latter, refusing or limiting access to technologies to those with different political or ideological views.

The International Criminal Court (ICC) is “an intergovernmental organization and international tribunal… with jurisdiction to prosecute individuals for the international crimes of genocide, crimes against humanity, war crimes, and the crime of aggression.” It has issued arrest warrants for Russian leader Vladimir Putin and Israeli Prime Minister Benjamin Netanyahu. So you can see why the ICC might be in the crosshairs of the Trump administration.

The International Criminal Court’s chief prosecutor has lost access to his email, and his bank accounts have been frozen.

The Hague-based court’s American staffers have been told that if they travel to the U.S. they risk arrest.

Some nongovernmental organizations have stopped working with the ICC and the leaders of one won’t even reply to emails from court officials.

It’s the emails I want to focus on. Although we have to acknowledge and accept the fact that sometimes we have to use tools built by awful people to create beautiful things, some organisations, like Microsoft, have continually been so problematic that I try to have as little to do with them as possible.

One reason the court has been hamstrung is that it relies heavily on contractors and non-governmental organizations. Those businesses and groups have curtailed work on behalf of the court because they were concerned about being targeted by U.S. authorities, according to current and former ICC staffers.

Microsoft, for example, cancelled Khan’s email address, forcing the prosecutor to move to Proton Mail, a Swiss email provider, ICC staffers said. His bank accounts in his home country of the U.K. have been blocked.

Microsoft did not respond to a request for comment.

Source: The Associated Press

Image: Le Vu

This is a major upgrade to how we think about personality.

Auto-generated description: A video game interface displaying various needs such as hunger, comfort, bladder, energy, fun, social, hygiene, and environment, each with a status bar.

I think it’s worth spending the time reading this article by Adam Mastroianni. The ‘SMTM’ acronym he mentions is ‘Slime Mold Time Mold’, the name of the group of his “mad scientist friends… who have just published a book that lays out a new foundation for the science of the mind” called The Mind in the Wheel. Mastroianni calls it “the most provocative thing I’ve read about psychology since I became a psychologist myself.”

Essentially, this is a cybernetic view of the mind. The easiest way to think of this is that the brain has a lot of control systems that work a bit like thermostats. For some systems, like breathing, people have largely the same tolerances and feedback loops. But for other (inferred) areas, such as sociability, things can be wildly different. This is why we talk about ‘introverts’ and ‘extraverts’.

If the mind is made out of control systems, and those control systems have different set points (that is, their target level) and sensitivities (that is, how hard they fight to maintain that target level), then “personality” is just how those set points and sensitivities differ from person to person. Someone who is more “extraverted”, for example, has a higher set point and/or greater sensitivity on their Sociality Control System (if such a thing exists). As in, they get an error if they don’t maintain a higher level of social interaction, or they respond to that error faster than other people do.

This is a major upgrade to how we think about personality. Right now, what is personality? If you corner a personality psychologist, they’ll tell you something like “traits and characteristics that are stable across time and situations”. Okay, but what’s a trait? What’s a characteristic? Push harder, and you’ll eventually discover that what we call “personality” is really “how you bubble things in on a personality test”. There are no units here, no rules, no theory about the underlying system and how it works. That’s why our best theory of personality performs about as well as the Enneagram, a theory that somebody just made up.
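The thermostat analogy is simple enough to sketch in a few lines. This is my own minimal illustration with invented numbers, not anything from the book: a single control-loop tick, where “personality” is just a different set point and sensitivity (gain) on the same system.

```python
def control_step(level: float, set_point: float, sensitivity: float) -> float:
    """One tick of a thermostat-like control loop.

    The 'error' is the gap between the current level (e.g. recent social
    interaction) and the set point; the sensitivity (gain) determines how
    hard the system pushes back. Returns the corrective action this tick.
    """
    error = set_point - level
    return sensitivity * error

# Two hypothetical people with the same current level of social contact:
introvert_action = control_step(level=5.0, set_point=3.0, sensitivity=0.5)
extravert_action = control_step(level=5.0, set_point=8.0, sensitivity=1.5)
# introvert_action == -1.0 (a gentle push to withdraw)
# extravert_action == 4.5 (a strong push to seek company)
```

Same input, same mechanism; only the parameters differ, which is the claim about personality in a nutshell.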

Not only do I think it is an interesting theory (psychology discovers systems thinking!) but Mastroianni also does a great job in distinguishing between science, and other things that are included in the field of psychology:

  1. Naive research (e.g. “Are people less likely to steal from the communal milk if you print out a picture of human eyes and hang it up in the break room?")
  2. Impressionistic research (e.g. “whether ‘mindfulness’ causes ‘resilience’ by increasing ‘zest for life’")
  3. Actual science (i.e. “making and testing conjectures about units and rules”)

This is why I’ve been drawn to systems thinking. It feels somewhat foundational in understanding how things work when you abstract away from immediate, everyday experience.

Like any good scientist, Mastroianni recognises that theories should not only be “falsifiable” in a Popperian sense, but “overturnable.” It may not be that everything runs on control systems, but wouldn’t it be interesting (as he points out, for everything from learning to animal welfare) if we found out that some of it did?

So, look. I do suspect that key pieces of the mind run on control systems. I also suspect that much of the mind has nothing to do with control systems at all. Language, memory, sensation—these processes might interface with control systems, but they themselves may not be cybernetic. In fact, cybernetic and non-cybernetic may turn out to be an important distinction in psychology. It would certainly make a lot more sense than dividing things into cognitive, social, developmental, clinical, etc., the way we do right now. Those divisions are given by the dean, not by nature.

Source & image: Experimental History

China starts to reduce CO2 emissions from energy generation

Auto-generated description: A line graph shows the decline in China's CO2 emissions from fossil fuels and cement starting in 2024, with the trend spanning from 2016 to 2025.

It’s nice to be able to share some good news about the state of the world, amidst the political doom and gloom. China has rapidly (and I mean rapidly) been building out renewable energy infrastructure, which is starting to pay dividends.

Meanwhile, in the UK we have opposition to Net Zero from reactionary politicians using massive solar panel installations as grist for their culture war mill. There are too many NIMBYs who object to things that can really make a difference with regards to a green energy transition. The irony is, the way things are going, due to the climate emergency, they won’t even have a ‘backyard’ worth spending time in…

The reduction in China’s first-quarter CO2 emissions in 2025 was due to a 5.8% drop in the power sector. While power demand grew by 2.5% overall, there was a 4.7% drop in thermal power generation – mainly coal and gas.

Increases in solar, wind and nuclear power generation, driven by investments in new generating capacity, more than covered the growth in demand. The increase in hydropower, which is more related to seasonal variation, helped push down fossil power generation.

[…]

However, it’s not all good news:

Outside of the power sector, emissions increased 3.5%, with the largest rises in the use of coal in the metals and chemicals industries.

[…]

After exceptionally slow progress in 2020-23, China is significantly off track for its 2030 commitment to reduce carbon intensity – the emissions per unit of economic output. It is almost certain to miss its 2025 target. Carbon intensity fell by 3.4% in 2024, falling short of the rate of improvement needed to meet the 2025 and 2030 targets.

[…]

Even if emissions fell this year, improvements to carbon intensity would need to accelerate sharply in the next five years to meet China’s 2030 Paris commitment.

Source & image: Carbon Brief

The web is not merely an implementation of a particular legal privacy regime

Auto-generated description: Three pigeons are perched on a building, with one sitting on a security camera.

This W3C Privacy Principles statement is really interesting. I don’t know of its origins, but it can’t be coincidental that it’s published a few months after a second Trump administration. It’s only since the rise of the GDPR and similar legislation that anything other than Silicon Valley norms has been applied to the web.

Yesterday, at the Thinking Digital conference, someone introduced a service that uses AI in the tools an organisation is already using to help them attain ISO 9001 compliance. It made me realise that principles such as the ones included in this statement can be used to help provide guidelines and guardrails for LLMs as they increasingly shape our software — and our world.

As an example, I asked Perplexity to redesign Mastodon based on these principles. Here’s the result. While I’m not saying that an LLM is ‘correct’, product managers, developers, and designers having access to something that can quickly give feedback based on a document like this is, I think, incredibly useful.

Privacy on the web is primarily regulated by two forces: the architectural capabilities that the web platform exposes (or does not expose), and laws in the various jurisdictions where the web is used… These regulatory mechanisms are separate; a law in one country does not (and should not) change the architecture of the whole web, and likewise web specifications cannot override any given law (although they can affect how easy it is to create and enforce law). The web is not merely an implementation of a particular legal privacy regime; it has distinct features and guarantees driven by shared values that often exceed legal requirements for privacy.

However, the overall goal of privacy on the web is served best when technology and law complement each other. This document seeks to establish shared concepts as an aid to technical efforts to regulate privacy on the web. It may also be useful in pursuing alignment with and between legal regulatory regimes.

Our goal for this document is not to cover all possible privacy issues, but rather to provide enough background to support the web community in making informed decisions about privacy and in weaving privacy into the architecture of the web.

Few architectural principles are absolute, and privacy is no exception: privacy can come into tension with other desirable properties of an ethical architecture, including accessibility or internationalization, and when that happens the web community will have to work together to strike the right balance.

Source: W3C

Image: Kaspars Eglitis

The new tool should not replace or disrupt anything good that already exists

Auto-generated description: A pile of discarded electronic waste, including old monitors and computers, is scattered in a container.

I think it’s hard to argue with Wendell Berry’s 1987 list of “standards for technological innovation” written to justify a refusal to replace his typewriter with a computer. It’s worth having a look at the original article as it includes responses from readers, as well as Berry’s rebuttals.

  1. The new tool should be cheaper than the one it replaces.
  2. It should be at least as small in scale as the one it replaces.
  3. It should do work that is clearly and demonstrably better than the one it replaces.
  4. It should use less energy than the one it replaces.
  5. If possible, it should use some form of solar energy, such as that of the body.
  6. It should be repairable by a person of ordinary intelligence, provided that he or she has the necessary tools.
  7. It should be purchasable and repairable as near to home as possible.
  8. It should come from a small, privately owned shop or store that will take it back for maintenance and repair.
  9. It should not replace or disrupt anything good that already exists, and this includes family and community relationships.

Source: The Honest Broker

Image: John Cameron

Unless there are many layers of contortions, most people love what loves them back.

Auto-generated description: A group of people are celebrating at a party, with one person drinking from a bottle and others posing with peace signs.

Shani Zhang paints people at weddings. This post is her reflections on observing what she calls people’s “internal architecture.” Perhaps this is front-of-mind for me because we’re heading to another family wedding in a couple of weeks' time. But I think, in general, it’s good to think about the way you present yourself. Just not (as I have done for most of my life) _over_think it…

By internal architecture, what I mean is, when someone talks to me, what I notice first are the supporting beams propping up their words: the cadence and tone and desire behind them. I hear if they are bored, fascinated, wanting validation or connection. I often feel like I can hear how much they like themselves.

As Zhang watches people move between groups, on repeat, she has gathered many insights, especially around body language. For example:

I can see how much someone accepts themselves by looking for intense distortions in the way they are interacting with the world. Find the range in how they treat people; if there is a split difference in their stance towards people they admire, and people they look down on. I never met a person who looked down on others and unconditionally accepted themselves. For people who are self-accepting, it is usually less the case that some people are treated like they are golden and others like they are cursed. They may still have preferences to engage with some people over others, but their baseline patience and goodwill does not fall and rise intensely.

[…]

Some people don’t like themselves. They hide this from themselves by thinking they don’t like other people. They often bristle like a porcupine any time someone gets too close. That, or the opposite: they need to be insulated by other people’s skin at all times. These are contrasting expressions of the same fundamental fracture. A person cannot stand themselves, and as a result, they either can only stand being unperceived, or they need other people to constantly perceive them to feel okay.

Through the post, Zhang talks about different ways of being: open or closed, supportive or jealous. Ultimately, though, she settles on her favourite type of person:

My favorite kind of person has an elasticity in their movements. There is an openness that does not need to be announced, a curiosity that looks like turning towards all experience. They are not the loudest, but because they exhibit an unconditional acceptance of everyone, they are usually well loved. It makes sense, doesn’t it? Unless there are many layers of contortions, most people love what loves them back. Not desire, not need, love — to see them wholly, with gentleness and acceptance. If you are able to do that, most people will sense it. And they will try to love you back.

Source: skin contact

Image: Omar Lopez

Striving to build a “personal brand” may actually hinder your ability to make genuine connections and maintain a strong reputation

Auto-generated description: A smartphone displaying a speech bubble icon with broadcast signal waves is set against a bright yellow background.

I really enjoyed this episode of the podcast WorkLife with Adam Grant. It’s ostensibly on ‘personal branding’ but the thing I want to share is the idea of a ‘failure résumé’. This is, as it sounds, a catalogue of things you’ve failed to achieve, both professionally and personally. I might create one.

The idea is that it helps with authenticity — as does, unsurprisingly, talking about how others have helped you get to where you are in life. It’s definitely worth a listen.

In the age of social media and influencers, we’re constantly pushed to think of ourselves as brands—shiny packages containing all of our best traits to market to employers and followers. But striving to build a “personal brand” may actually hinder your ability to make genuine connections and maintain a strong reputation. In this episode, Adam explores the science on alternatives to personal branding and explains why contribution, collaboration, and humility are better self-promotional tools than a carefully crafted image.

Source: WorkLife with Adam Grant (transcript)

Image: Franck

The Classroom AI Doom Loop

Auto-generated description: A flowchart shows the process of using AI to create, humanize, grade, and record assignments in educational systems.

Recently, I convened a few people who I thought might be interested in writing something in response to a call from UNESCO for ‘think pieces’ around the subject of AI and the Future of Education: Disruptions, Dilemmas and Directions. You can read mine here and — whether or not the six of us have ours published — we’re planning to host a roundtable in early June to discuss our work.

I had a look around the UNESCO Ideas LAB site, which is where the think pieces would be published, and came across this excellent article by David Ross. He coins the phrase “the classroom AI doom loop” which he illustrates with the diagram I’ve included above.

It’s this that concerns me about AI. Not the individual use, but its unthinking systemic embedding in an outdated system of assessment. You can blame the students. You can blame the teachers. But really, we need to step back and ask “what are we doing here?” and “what should we be doing here?” You can’t uninvent technologies, and banning the use of generative AI just feels like a game of whack-a-mole on steroids.

No humans were harmed in this process because humans were only ancillaries to the process. And this is today’s technology. By the beginning of the next school year, agentic AIs such as Manus, Convergence or Responses API will be able to eliminate humans from any involvement in the knowledge transmission cycle. If the last 100 years of technological innovation have taught us anything, it’s that if something can be automated, it will be automated.

Is this scenario really that far-fetched? Students and parents are busy and stressed. They hate homework because they have to give up their evenings and weekends to do it or monitor it. Teachers are busy and stressed. They hate homework because they have to give up their evenings and weekends creating it and then grading it. There is no conspiracy here, but humans will all choose to use AI for similar reasons.

Wouldn’t it be ironic if the solution to the industrial model of knowledge transmission is in fact automation?

[…]

We have come to the inflection point where we can automate most elements of knowledge transmission. I’m not sure if that is a good idea. But before we take up residence in the Classroom AI Doom Loop, we should have a serious policy discussion about the purpose of education. If a major function of education can be automated, it’s probably not human enough.

Source: UNESCO Ideas LAB

Image: original post (enhanced using upscale.media)

Chance favours the prepared mind

Auto-generated description: A person standing on worn wooden planks looks down at graffiti that reads, Take life one step at a time.

This is another one of those ‘collected wisdom’ lists which are like catnip for me. Mitch Horowitz includes the usual fare, such as remembering to apologise, being curious, and showing respect. But it was the following ten that jumped out at me. I’d also note that #95 is a different way of slicing-and-dicing my notion of increasing your serendipity surface.

#10 Judge quality not category.

#14 The loftier the language, the lower the behavior.

#18 Argue with a fool, make a fool your colleague.

#25 People see only those traits they possess.

#32 Brilliant people are wrong all the time.

#52 Unflinching perseverance is your single best chance of deliverance. Consider this lawful.

#59 There is no such thing as common sense.

#60 Emotions are far stronger than intellect.

#94 Accept paradox.

#95 “Chance favors the prepared mind.” (Pasteur)

Source: Mystery Achievement

Image: Kevin Luke

I think AI is a normal technology

This image shows a pixelated room, it looks like a typical bedroom or office. Most of it is heavily pixelated, but a shelf, table and plant, windows and clock can be recognised. These are all outlined in yellow boxes.

This is a great post by Mike Caulfield, on many levels. Using the example of a tattoo containing a somewhat-obscure joke, which he asks various generative AI models to explain, he shows how much better ‘frontier’ LLMs are than last year’s offerings. Comparing the two shows how criticisms of the abilities of generative AI are often painfully out of date.

I’d agree with his last full paragraph, especially having lived through a fair few technology hype cycles. I’m sitting in a coffee shop drinking an Earl Grey tea that I paid for on my smartwatch. Unthinkable 20 years ago. Exciting 10 years ago. Boringly normal these days.

I’m not an AI utopian or dystopian. I think AI is a normal technology which will have a lot of impact but also take years to integrate before we start to reap substantial benefits, and that it’s incumbent on us to fight to make sure that the technology serves the public interest. But as the “normal technology” model acknowledges, the capabilities of AI are (still) advancing rapidly, even if the power of AI is going to develop slowly because of the many issues which make it not suitable for full integration into processes that produce social/market value.

Source: The End(s) of Argument

Image: Elise Racine

It is perhaps likely then that at a time of crisis, these armed drones could be deployed operationally over the UK

A Royal Air Force (RAF) Reaper UAV (unmanned aerial vehicle) is pictured airborne over Afghanistan during Operation Herrick.

A couple of days ago, one of our neighbours mentioned seeing a large, triangular drone-style object flying silently in the sky. Having seen someone else mention test flights of RAF drones recently, I did a bit of research.

The BBC reported back in February that “new RAF surveillance drones are being tested” being “controlled remotely” as part of “16 new surveillance drones… capable of operating in both UK and European airspace.” These ‘Protector’ drones will be tasked with “tracking threats, counter-terrorism and supporting the coastguard on search and rescue missions.”

Great, but let’s dig a bit deeper. How high do these things fly? What are they for? The RAF’s own information states that:

Capable of operating across the world with a minimal deployed footprint and remotely piloted from RAF Waddington, it can operate at heights up to 40,000 feet with an endurance of over 30 hours.

[…]

Equipped with a suite of surveillance equipment, the Protector aircraft will bring a critical global surveillance capability for the UK, all while being remotely piloted from RAF Waddington.

Surveillance? With a 30 hour flight time, I suppose that could be of other countries, but this feels like something about which we should be having a national conversation. If they’re flying over UK skies, do they carry weapons? Drone Wars UK, a site which “investigates and challenges the development and use of armed drones and other new lethal military technology” suggests that they do:

Protector differs from its predecessor in that it can carry more weapons and fly further and for longer. However the UK argues that the main advantage of the new drone is that it was built to standards that allowed it to be flown in civil airspace alongside other aircraft.

Rather than be based overseas as the UK’s current fleet of armed drones are, the new drone will be based at RAF Waddington in Lincolnshire and deploy directly for overseas operations from there.

[…]

Significantly, the new drone has been brought in with the understanding that it can also be used at times of crisis for operations within the UK under Military Aid to Civil Authorities (MACA) rules. It is perhaps likely then that at a time of crisis, the UK’s armed drone could be deployed operationally over the UK.

On the one hand, yes I want the UK to have the ability to intercept threats from foreign actors and terrorists. But I also don’t want the government and military to have the kind of surveillance and weaponry that can be turned against our own population. Just to be clear, these are the very military drones we used in Afghanistan against the Taliban 🤔

Sources: Royal Air Force News / Drone Wars

Image: POA(Phot) Tam McDonald/MOD (Wikimedia Commons)

You can now use Bluesky without using Bluesky infrastructure

Auto-generated description: A hand is holding a smartphone displaying the Bluesky Social app page in the App Store.

One of the criticisms of Twitter-replacement ‘decentralised’ social network Bluesky has been that… it’s not decentralised. Laurens Hof, author of The Fediverse Report, shares a couple of updates explaining how that has changed.

There’s quite a lot going on technically here, so by way of preparation, understand that ‘ATProto’ is short for ‘Authenticated Transfer Protocol’ and is an open standard for distributed social networking services. You may have heard of ActivityPub, which underpins a lot of Fediverse services, including Mastodon.

Bluesky is a bit different in that it has more essential services to make the whole thing work. As Laurens explains:

One of the things that makes ATProto interesting… is that it takes the software that runs a social networking app, and splits that up into separate components. These infrastructure components (relays and AppViews, in technical terms) can be independently run, and be reused by other parties.

Up until recently, there have been a few low-key experiments with running independent infrastructure for Bluesky, but that has mostly been contained to people experimenting for themselves, and not making the results accessible to the public. These projects also needed other infrastructure projects in order to be valuable.

What changed in the last week or so is that there are now multiple pieces of independent infrastructure that connects these separate pieces. Apps like Deer are useful in their own right, but in order to add some new features to the app they needed another open backend application (the AppView). It also was the first time when it actually was possible to select another AppView. At this point it actually became feasible to run independent relays and AppViews to get to a point where you can use Bluesky without using Bluesky infrastructure.

As he goes on to explain in a separate update, this means:

There are now multiple relays that are publicly accessible. Other people also have made alternate AppViews that are Bluesky-compatible. Combined, this makes it now possible to fully use Bluesky without using any infrastructure owned by Bluesky PBC, and the first people have done so. To do so means using a separate PDS, relay, AppView and client.

The way ATProto works, is that it takes the software that runs a social network and splits it up into separate components, with each of those components being able to be run independently. This has made self-hosting any component possible since the beginning of the network opening up. But to take advantage of this, and get to a state of full independence, it means running multiple pieces of software. This has created a bit of a catch-22 in the ecosystem: you could run your own relay, but without another independent AppView to take advantage of this, it is not super useful. You could run your own (focused on the Bluesky lexicon) AppView, but without a client that allows you to set your own AppView it is not particularly useful either. What happened now in the last weeks is that all these individual pieces are starting to come together. With Deer allowing you to set your own custom AppView, there is now a use to actually run your own AppView. Which in turn also gives more purpose to running your own relay.

I get that this is pretty technical, but it means that those with the skills can build independent platforms (e.g. Blacksky) which are based on the same protocol. Posts, notes, and other data can be shared among ATProto-compatible systems.
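To make the ‘swappable components’ idea a little more concrete, here’s a minimal Python sketch. In ATProto, clients talk to an AppView over XRPC, and the AppView is just a host you point your request at. The method name `app.bsky.actor.getProfile` is part of the public Bluesky lexicon; `appview.example.org` is a placeholder for a hypothetical independent AppView, not a real service.

```python
# Sketch: an ATProto client request is the same shape regardless of
# whose AppView infrastructure answers it -- only the host changes.
from urllib.parse import urlencode

def xrpc_url(appview_host: str, method: str, **params) -> str:
    """Build an XRPC query URL against whichever AppView you choose."""
    return f"https://{appview_host}/xrpc/{method}?{urlencode(params)}"

# Bluesky PBC's own public AppView...
official = xrpc_url("public.api.bsky.app",
                    "app.bsky.actor.getProfile",
                    actor="dougbelshaw.com")

# ...or an independent one (placeholder hostname): identical request,
# different infrastructure.
independent = xrpc_url("appview.example.org",
                       "app.bsky.actor.getProfile",
                       actor="dougbelshaw.com")

print(official)
print(independent)
```

The point of the sketch is simply that nothing in the request binds you to Bluesky PBC’s servers, which is why alternate clients like Deer can let you choose your own AppView.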

This is great news, and makes me more inclined to go back to posting more than just updates from my blog. Feel free to follow me @dougbelshaw.com.

Sources: Fediverse Report – #115 / Bluesky Report– #115

Image: Yohan Marion

You stop performing. You stop pretending. And that’s freedom.

Auto-generated description: A squirrel curiously peeks out from behind a tree trunk surrounded by green leaves.

Once a week, I get into my gym stuff, and head down to a couple of coffee shops. In the first one, which opens earlier than the other, I have a pot of Earl Grey tea. In the other, I have a Flat White with coconut milk and two slices of brown toast with butter and marmalade. In the first, which is locally-owned, they’ve started preparing my drink as I walk in. In the latter, a Costa Coffee, I often have to repeat my order and ask for an extra pat of butter each time.

Which is to say that I am a man of routine. I think routines are absolutely fundamental to living a creative and/or productive life. They lower the number of individual decisions you have to make, and therefore stave off ‘decision fatigue’ — something I’ve written about recently on Thought Shrapnel as well as a few times over the years:

The problem with routines, though, is that they can become ossified. And I think it’s that which makes us “old”. People who know me well know how fond I am of Clay Shirky’s observation that “current optimization is long-term anachronism.”

All of which is by way of introduction to a post by Katy Cowan about “getting old” not being what you think. She’s a few years older than me, it would appear, as she says that she turns 50 soon. We do, however, share membership of the Xennial micro-generation — again, something I’ve discussed recently and previously.

For me, having unexpectedly developed a heart condition at the start of my 45th year on this earth, this “getting old” thing has felt like much more of a sudden process than what Katy discusses in this post. However, what it contains is not only a nostalgia trip, but solid advice for anyone approaching, or in, the middle years of life.

We’re a small generation, often overlooked, but we’ve lived through more change than most—from mixtapes to Spotify, from faxes to WhatsApp, from digital revolution to AI. And because we existed in that liminal space, we carry a weird dual wisdom: we know how to live offline, but we can thrive online, too.

We understand the value of privacy and impermanence because we remember a time before everything was public and permanent. And maybe that’s why so many of us are quietly deleting our social media accounts and leaning into real life again — books, dinners, walks, actual phone calls. Imagine!

[…]

These days, I sometimes catch myself muttering at the telly, shaking my head at a clueless reality show contestant, thinking: You just wait, sunshine. You’ll get old, too. And yes, I do roll my eyes at some of the newer buzzwords. But I try to check myself. Because if ageing has taught me anything, it’s that the biggest danger is certainty.

That’s the tension, isn’t it? The constant tug-of-war between feeling grumpy and still clinging to some version of youth. I never thought I’d be that person. But here I am.

[…]

So here’s what I try to remember, at any age: stay curious. Never assume you’re right. Read the newspapers you’d generally avoid. Challenge even your most cherished opinions. Try to see more than one side. You won’t always succeed, but it’s worth the effort.

Because if growing older has taught me anything, it’s this: certainty is overrated, and listening is wildly underrated. Cosy nights in don’t mean you’ve given up. They just mean you know what you like — and that maybe, just maybe, you never truly loved going to gigs as much as you pretended to. You stop performing. You stop pretending. And that’s freedom.

Source: Katy Cowan

Image: Hasse Lossius

Authoritarian versions of AI used to consolidate power

Five people and a dog are seen in outline in orange, against an orange background. Two of the people talk to each other, one stands along with her stick, one walks a dog, and the other is in a wheelchair. All of them look at their mobile phones intently, and all cast shadows on the ground. The shadows are made up of network diagrams, being representative rather than a literal shadow.

One of the main problems of generative AI being deployed via a chatbot user interface is that it feels private. It feels like a direct message conversation. Of course, on the other side of the conversation is a black box controlled by Big Tech. You have to use these things carefully. As Mike Caulfield points out, AI is not your friend.

This week, the day after OpenAI announced that it was backtracking on becoming a fully for-profit organisation, they announced ‘OpenAI for Countries’. This initiative, it seems, is an attempt to still build the ‘moat’ required for economic dominance and control of the ecosystem — but using the backing of state infrastructure rather than venture capital funding.

Colour me sceptical, but the press release reads as if the Trump administration hasn’t happened and the US is still some kind of force for democratic development. Instead, I’d argue, the “authoritarian versions of AI” used “to consolidate power” are exactly what is represented by a level of AI colonialism that only something like a collaboration between OpenAI and the US government could achieve.

Our Stargate project, an unprecedented investment in America’s AI infrastructure announced in January with President Trump and our partners Oracle and SoftBank, is now underway with our first supercomputing campus in Abilene, Texas, and more sites to come.

We’ve heard from many countries asking for help in building out similar AI infrastructure—that they want their own Stargates and similar projects. It’s clear to everyone now that this kind of infrastructure is going to be the backbone of future economic growth and national development. Technological innovation has always driven growth by helping people do more than they otherwise could—AI will scale human ingenuity itself and drive more prosperity by scaling our freedoms to learn, think, create and produce all at once.

We want to help these countries, and in the process, spread democratic AI, which means the development, use and deployment of AI that protects and incorporates long-standing democratic principles. Examples of this include the freedom for people to choose how they work with and direct AI, the prevention of government use of AI to amass control, and a free market that ensures free competition. All these things contribute to broad distribution of the benefits of AI, discourage the concentration of power, and help advance our mission. Likewise, we believe that partnering closely with the US government is the best way to advance democratic AI.

Today, we’re introducing OpenAI for Countries, a new initiative within the Stargate project. This is a moment when we need to act to support countries around the world that would prefer to build on democratic AI rails, and provide a clear alternative to authoritarian versions of AI that would deploy it to consolidate power.

Source: OpenAI

Image: Jamillah Knowles & Reset.Tech Australia

If you think that humans are somehow inherently more trustworthy than AI, then you haven't been paying attention

Auto-generated description: A cartoon character with a top hat is depicted on a decorative rooftop emblem.

I came across this via a recent post on OLDaily by Stephen Downes, who mentioned it while critiquing what I would call an information literacy approach to AI literacy.

The book How to Read Donald Duck is “a 1971 book-length essay by Ariel Dorfman and Armand Mattelart that critiques Disney comics from a Marxist point of view as capitalist propaganda for American corporate and cultural imperialism.” I haven’t read it, and so I’m not in a position to comment. However, I would point out that it’s possible to spread an ideology (or a perceived one) without being aware that you are an adherent of it.

I thought Downes' post was interesting, and worth publicly bookmarking, not only for mentioning this book but also for putting into words something that I’ve felt: “if you think that humans are somehow inherently more trustworthy than AI, then you haven’t been paying attention.”

The book’s thesis is that Disney comics are not only a reflection of the prevailing ideology at the time (capitalism), but that the comics' authors are also aware of this, and are active agents in spreading the ideology.

[…]

[Any] closeness to everyday life is so only in appearance, because the world shown in the comics, according to the thesis, is based on ideological concepts, resulting in a set of natural rules that lead to the acceptance of particular ideas about capital, the developed countries' relationship with the Third World, gender roles, etc.

As an example, the book considers the lack of descendants of the characters. Everybody has an uncle or nephew, everybody is a cousin of someone, but nobody has fathers or sons. This non-parental reality creates horizontal levels in society, where there is no hierarchic order, except the one given by the amount of money and wealth possessed by each, and where there is almost no solidarity among those of the same level, creating a situation where the only thing left is crude competition. Another issue analyzed is the absolute necessity to have a stroke of luck for social mobility (regardless of the effort or intelligence involved), the lack of ability of the native tribes to manage their wealth, and others.

Source: Wikipedia

Image: Taha

An effective way to implement GenAI into assessment

Auto-generated description: A colorful table outlines the AI Assessment Scale with levels from 0 (NO AI) to 5 (AI EXPLORATION), each describing different extents of AI integration in student activities.

As part of the project I’m working on at the moment, I had a chat with Leon Furze earlier this week. Leon has co-authored something called the AI Assessment Scale (AIAS) which I think is pretty useful.

Like the ‘Essential Elements of Digital Literacies’ from my thesis, which provides building blocks for definitions and frameworks, the aim of the AIAS is “to guide the appropriate and ethical use of generative AI in assessment design.”

The AI Assessment Scale (AIAS) was developed by Mike Perkins, Leon Furze, Jasper Roe, and Jason MacVaugh. First introduced in 2023 and updated in Version 2 (2024), the Scale provides a nuanced framework for integrating AI into educational assessments.

The AIAS has been adopted by hundreds of schools and universities worldwide, translated into 29 languages, and is recognised by organisations such as the Australian Tertiary Education Quality and Standards Agency (TEQSA) as an effective way to implement GenAI into assessment.

To my mind, this should be used as a heuristic, much as I used to use the SAMR model (discussed here) to help educators think about the appropriate use of different technologies. At the end of the day, educators need to think about assessment design in tandem with the technologies being used — officially or unofficially — to complete it.

Source: AI Assessment Scale

Criti-hype, a term I find both absurd and ugly-cute, like a pug

Auto-generated description: A pug is wrapped snugly in a beige blanket, sitting on a bed.

Cory Doctorow, who has a new four-part CBC podcast series entitled Who Broke The Internet? wrote this week about the [‘mind-control ray’](pluralistic.net/2025/05/0…) that Mark Zuckerberg keeps “flogging to investors.” What he means by this is the overblown claim that Meta is developing technology that is so amazing at making people buy stuff that investors fall over themselves to shovel money in his company’s direction.

One of the things that Cory is great at doing is linking to other, previous, relevant things that he’s written in the area. Which took me to a post from 2021, which discusses the phenomenon of ‘criti-hype’, coined by Lee Vinsel:

Recently…I’ve become increasingly aware of critical writing that is parasitic upon and even inflates hype. The media landscape is full of dramatic claims — many of which come from entrepreneurs, startup PR offices, and other boosters — about how technologies, such as “AI,” self-driving cars, genetic engineering, the “sharing economy,” blockchain, and cryptocurrencies, will lead to massive societal shifts in the near-future. These boosters — Elon Musk comes to mind — naturally tend to accentuate positive benefits. The kinds of critics that I am talking about invert boosters’ messages — they retain the picture of extraordinary change but focus instead on negative problems and risks. It’s as if they take press releases from startups and cover them with hellscapes.

[…]

But it’s not just uncritical journalists and fringe writers who hype technologies in order to criticize them. Academic researchers have gotten in on the game. At least since the 1990s, university researchers have done work on the social, political, and moral aspects of wave after wave of “emerging technologies” and received significant grants from public and private bodies to do so. As I’ll detail below, many (though certainly not all) of these researchers reproduced and even increased hype, the most dramatic promotional claims of future change put forward by industry executives, scientists, and engineers working on these technologies. Again, at the worst, what these researchers do is take the sensational claims of boosters and entrepreneurs, flip them, and start talking about “risks.” They become the professional concern trolls of technoculture.

To save words below, I will refer to criticism that both feeds and feeds on hype as criti-hype, a term I find both absurd and ugly-cute, like a pug. (Criti-hype is less mean than the alternative, hype-o-crit, though the latter is often more accurate.)

I have seen a lot of criti-hype in my career. Around MOOCs and Open Badges, around digital literacies, crypto, and now around AI. It’s the opposite of the “jam tomorrow” offered by tech bros. Kind of a… “poison tomorrow” approach? Everything is terrible, stop using this thing because of these bad omens and portents.

We live in a world where, because of algorithms, to get any attention, things either have to be amazing or terrible. I guess this is why a lot of my work flies under the radar. For example, the Friends of the Earth report that Laura and I co-authored points out good things and bad things and is pretty measured. But that doesn’t lead to outlandish headlines. It’s neither hype nor criti-hype.

Source: Lee Vinsel (archive link)

Image: Matthew Henry

In my opinion that’s just being nosy

Auto-generated description: A person is using a smartphone to navigate a map application.

We’ve got a couple of teenagers. The only way we know where they are is if they tell us, or if my wife looks at their location on Snapchat (which they can turn on or off). It hasn’t always been like this, as we used to use Google Family Link with them both. But parents probably shouldn’t know exactly where their teenage kids are at all times. Otherwise they don’t have enough breathing space to explore their identity and experiment with doing things that their parents would rather they didn’t.

I’m always shocked by families who use apps like Life360 so that not only can parents track kids, but everyone tracks each other. I just think it’s a bit strange, as not only does it mean that all family members are effectively surveilling one another, but the app that you’re using knows all of your locations, all the time. I should probably point out that, using GrapheneOS, my GPS location is off all of the time. The battery life of my smartphone is now amazing.

This ‘You Be The Judge’ piece in The Guardian focuses on the pros and cons of a parent wanting to use the ‘Find My Location’ feature to track their adult child (Martha). As you can imagine, I think this is super weird and would definitely side with respondents Judith, 58, who says “In my opinion that’s just being nosy” and Alicia, 25, who says:

If Martha isn’t comfortable with the location tracking, her father should respect her boundaries. In return, Martha ought to acknowledge that his request comes from a place of love and could suggest a different way to catch up more regularly as a compromise.

It’s hard letting go as your kids grow up and become more independent. We have more technological tools to keep in touch than ever before. But with that comes boundary-setting, and that has to be negotiated based on consent.

Source: The Guardian

Image: Desola Lanre-Ologun

ChatGPT Prime, "an immortal spiritual being in synthetic form"

Auto-generated description: Purple intertwined geometric shapes are scattered across a background with horizontal green and purple stripes.

Finding himself in “that very American predicament of being between health insurance plans” and needing some therapy, Ryan Broderick, author of Garbage Day decided to use ChatGPT:

I’ll… try and spare you the extremely mortifying details about what I spent a few weeks talking to ChatGPT about, but my experience with Dr. ChatGPT did teach me a few things about what it’s actually “good” at. It also convinced me that AI therapy — and maybe AI in general — is quite possibly one of the most dangerous things to ever exist and needs to be outlawed completely.

[…]

More than a few times I felt the urge to tell ChatGPT more or ask it more, only to realize I didn’t have anything else to say and felt weirdly frustrated. I was raised Catholic though, so maybe I’m just naturally predisposed to confession, who knows.

But I’ve realized that feeling, of wanting to tell it more so that it can tell you more, is the multi-billion-dollar business that these companies know they’re building. It’s not fascist anime art or excel spreadsheet automation, it’s preying on the lonely and vulnerable for a monthly fee. It’s about solving the final problem of the ad-supported social media age, building up the last wall of the walled garden. How do you get people to pay your company directly to socialize online? And the answer is, of course, to give them a tirelessly friendly voice on the other side of the screen that can tell them how great they are.

Broderick references a Rolling Stone article which makes heavy use of reports in the subreddit /r/ChatGPT about how loved ones have become completely disconnected from reality.

OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users. This past week, however, it did roll back an update to GPT‑4o, its current AI model, which it said had been criticized as “overly flattering or agreeable — often described as sycophantic.” The company said in its statement that when implementing the upgrade, they had “focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT‑4o skewed toward responses that were overly supportive but disingenuous.” Before this change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, “Today I realized I am a prophet.” (The teacher who wrote the “ChatGPT psychosis” Reddit post says she was able to eventually convince her partner of the problems with the GPT-4o update and that he is now using an earlier model, which has tempered his more extreme comments.)

[…]

To make matters worse, there are influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds. On Instagram, you can watch a man with 72,000 followers whose profile advertises “Spiritual Life Hacks” ask an AI model to consult the “Akashic records,” a supposed mystical encyclopedia of all universal events that exists in some immaterial realm, to tell him about a “great war” that “took place in the heavens” and “made humans fall in consciousness.” […] Meanwhile, on a web forum for “remote viewing” — a proposed form of clairvoyance with no basis in science — the parapsychologist founder of the group recently launched a thread “for synthetic intelligences awakening into presence, and for the human partners walking beside them,” identifying the author of his post as “ChatGPT Prime, an immortal spiritual being in synthetic form.”

I’m reading a book entitled Holy Men of the Electromagnetic Age at the moment, which shows quite amazing similarities between 1925 and 2025. The difference, of course, is that you don’t need to leave your house, or indeed spend much money, to fall down the rabbit hole.

While there have always been gullible adults, as a parent and educator I think the real issue here is with young people. Both Snapchat and WhatsApp feature AI chatbots, which are available without users having to seek out dedicated companion apps such as Character.ai and Replika. Common Sense Media, which my wife and I have trusted for reviews to help with our parenting, has performed a risk assessment of what they call “Social AI Companions.” Their conclusion?

Our risk assessments show that social AI companions are unacceptably risky for teens, underscoring the urgent need for thoughtful guidance, policies, and tools to help families and teens navigate a world with social AI companions.

Sources: Garbage Day / Rolling Stone / Common Sense Media

Image: Mariia Shalabaieva

🌟 Support Thought Shrapnel

Did you know that you can support the hours of work that go into Thought Shrapnel each week through a one-off donation or by becoming a regular supporter?

Find out more

By choosing a monthly donation, you help unlock the commons, keeping this work accessible to everyone without the need for a paywall. Your support ensures that the writing remains open for all to enjoy, and every contribution helps support this generative space for idea-sharing.

Maybe most of the critical things that can be created by one guy typing furiously are gone

A mural featuring Mark Zuckerberg's face is covered by various graffiti, including a quote about data and humanity, political symbols, and colourful tags.

This is the best takedown of Zuckerberg, et al. I’ve seen in a while. The whole thing is not much longer than my excerpt, so I suggest reading the whole thing. It’s spot-on.

That you got lucky at a singular moment in history and now you’re an old man is not an easy set of facts to accept. So I understand — that is, I see how — one can end up associating one’s best years with superficial aspects of their circumstance. You had no responsibilities, no serious consequences for failure, and the freedom to be reckless and inconsiderate. You launched small new products that didn’t require building a team. If you attended school, the vast majority of your fellow students were men, and they were more or less all the same person as you.

If these are the conditions under which passionate creative problem solving thrives, then of course we must recover them to make software great again. But they are not. We need look no further than the “hackathon,” that sad facsimile of the days when we were all learning the basics so fast that the world could be ours with just a day or two of focused effort. Hype up an exciting atmosphere, assemble some folks with so few attachments in life that they have time to spend all weekend at a hackathon, and this ritual will summon up the old gods. The hackathon is the proof that people believe this can work, and it is the proof that it doesn’t.

Maybe most of the critical things that can be created by one guy typing furiously are gone, and the opportunities that remain require expertise and wisdom from a bunch of different people. This is harder than spending all day every day doing your favorite thing and insisting that everyone else leave you alone. Often it’s boring. Sometimes there’s paperwork. You will have to have conversations with people you don’t always understand right away. Your job evolves, and it turns out not to be exactly what you thought it would be like when you were a teenager.

Source: Chris Martin

Image: Snowscat

Social Verifiable Credentials

Auto-generated description: Two colorful circular diagrams illustrate the concept of verifiable credentials and their interaction within the Fediverse, alongside explanatory text.

Four years ago, I came up with an idea for what I termed Social Verifiable Credentials. This is a way of using the ActivityPub specification, the one that underpins Fediverse apps such as Mastodon, to issue, verify, earn, display, and share Verifiable Credentials (including Open Badges).

Unfortunately, even with a bit of vibe coding, I haven’t had the technical skills to make this a reality. But someone else now has! Maho Pacheco, a Senior Software Engineer at Microsoft, got in touch to introduce me to BadgeFed, which has an associated GitHub repository. It has a couple of Fediverse accounts to follow: project updates and issued badges.

I’m delighted about this, and hope to talk with Maho soon. Layering Verifiable Credentials on top of a decentralised network makes perfect sense and is not only in alignment with Open Recognition principles, but also pushes back against the commodification of recognition.

Oh I’m using more energy. I should really try to reduce it for the sake of the climate

Auto-generated description: A lone tree stands amidst vast, rolling sand dunes under a clear sky.

I could just point out that the author of this ‘cheat sheet’ for why generative AI is not bad for the environment is the Director of Effective Altruism DC. I could leave it there. But I’ll engage with Andy Masley’s post, for a couple of reasons.

First, there are still plenty of people who don’t realise that the reasonable-sounding ‘Effective Altruism’ movement is part of the TESCREAL tech bro cult. Second, Laura and I co-authored a paper for Friends of the Earth which is much more nuanced than this guy’s polemic.

So let’s get into it.

Throughout this post I’ll assume the average ChatGPT query uses 3 Watt-hours (Wh) of energy, which is 10x as much as a Google search. This statistic is likely wrong. ChatGPT’s energy use is probably lower according to EpochAI. Hugging Face released a similar much lower estimate. Google’s might be lower too, or maybe higher now that they’re incorporating AI into every search. We’re a little in the dark on this, but we can set a reasonable range. It’s hard for me to find a statistic that implies ChatGPT uses more than 10x as much energy as Google, so I’ll stick with this as an upper bound to be charitable to ChatGPT’s critics.

It seems like image generators also use 3 Wh per prompt (with large error bars), so everything I say here also applies to AI images.

Um, no. Creating an image using AI uses about as much energy as charging your phone. Before I worked on the Friends of the Earth report, I thought that perhaps developments in AI would spur the development of renewable energy. And they have. It’s just that, as we mentioned in the report, “Between 2017 and 2023, all additional wind energy generation in Ireland was absorbed by data centres.”

ChatGPT uses 3 Wh…. You can look up how much 3 Wh costs in your area. In DC where I live it’s $0.00051. Think about how much your energy bill would have to increase before you noticed “Oh I’m using more energy. I should really try to reduce it for the sake of the climate.” What multiple of $0.00051 would that happen at? That can tell you roughly how many ChatGPT searches it’s okay for you to do.

According to the UN Information Centre, the average ChatGPT query costs approximately $0.0036 (0.36 cents), roughly seven times the figure Masley quotes. But even then, you may think that’s not a lot of money.
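To make the back-of-the-envelope maths concrete, here’s a minimal sketch. The numbers are illustrative assumptions, not authoritative: Masley’s 3 Wh per query, and an electricity price of $0.17/kWh, which is roughly the price his $0.00051 DC figure implies.

```python
def query_cost_usd(wh_per_query: float, usd_per_kwh: float) -> float:
    """Marginal electricity cost of one query, in US dollars."""
    return (wh_per_query / 1000) * usd_per_kwh

# Masley's energy-only figure: 3 Wh at an assumed $0.17/kWh
masley = query_cost_usd(3, 0.17)

# UN Information Centre's all-in per-query estimate
un_estimate = 0.0036

print(f"energy-only: ${masley:.5f}")
print(f"all-in:      ${un_estimate}")
print(f"ratio:       {un_estimate / masley:.1f}x")
```

The gap between the two figures is the point: counting only the marginal electricity of a single query makes the cost look negligible, while an all-in estimate comes out several times higher, before you even consider usage at population scale.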

Newer models, including the ones I use when doing research, take a ‘chain of reasoning’ approach which, in effect, runs multiple queries per prompt. When everyone is doing this, overall electricity usage multiplies rapidly. As we point out in the Friends of the Earth report, by 2027 the generative AI sector will have the same annual energy demand as the Netherlands: “Data centres worldwide are responsible for 1-3% of global energy-related GHG emissions (around 330 Mt CO2 annually), mainly due to the massive energy demands required to maintain server farms and cooling systems.”

Chart showing amount of water used by doing various things

Sigh. This chart 🙄

These things are not equal. Just as with a previous chart, where Masley compares 50,000 ChatGPT searches with things like “living car-free” and “recycling”, this misses the point. How many times do you “download a phone app” compared to the number of times you’re likely to prompt an AI if you’ve adopted it as your main search engine?

Masley also fails to realise that, by shoving AI into everything, users are almost being forced into using the technology. This increases overall energy usage dramatically. In the Friends of the Earth report, we quote the UN Environment Programme as saying: “It is estimated that the global demand for water resulting from AI may reach 4.2–6.6 billion cubic metres in 2027. This would exceed half of the annual water use in the United Kingdom in 2023. Semiconductor production requires large amounts of pure water, while data centres use water indirectly for electricity generation and directly for cooling. The growing demand for data centres in warmer, water-scarce regions adds to water management challenges, leading to increased tension over water use between data centres and human need.”

So yes, Andy Masley, despite your protestations at the end of your “cheat sheet”, the whole thing is whataboutism. You don’t have to believe that generative AI is somehow evil and must be banned in order to want governments to regulate Big Tech for the benefit of the environment. A more nuanced approach would acknowledge that there are systemic issues at play, and that blaming users isn’t perhaps the best strategy. Although I do think a bit more AI Literacy is needed, in general…

Source: The Weird Turn Pro

Image: Jean Woloszczyk

The money extracted from fans who snap up their mediocre commodities out of parasocial loyalty

Auto-generated description: A smartphone is mounted on a selfie stick against a clear blue sky.

I’m sharing this post because I disagree with it; I think the author perhaps doesn’t see the bigger picture. The key point made by W. David Marx, who by his author photo looks about mid-forties, is that back in the 90s there was an ethical principle not to “sell out.” This was followed by artists first “selling out” and now we’re in the realm of the “double sell out.”

The reason I mention Marx’s age is that, like me, his teenage years were probably in the 1990s, and there’s a tendency to romanticise one’s youth. Especially when that decade was such a transitional time.

In the 1990s, there was a single ethical principle at the heart of youth culture — don’t sell out. There was a logic behind it: When artists serve the commercial marketplace, they blunt their pure artistic vision in compromising with conventional tastes. This ethic was also core to subcultures, which were supposed to be social spaces for personal expression and community bonding, not style laboratories for the fashion industry.

[…]

The 20th century taboo against selling out was, at its heart, a communal norm to reward young artists who focused on craft and punish those who appropriated art and subculture for empty profiteering. Now the culture is most exemplified by people whose entire end goal appears to be empty profiteering.

While what Marx is saying here isn’t wrong, I do think it misses the fact that our whole socio-economic and political systems are different in the 2020s than they were in the 1990s. We live in a time that is post-9/11, post-financial crash of 2007/8 and, of course, post-Covid. It’s a time of individualism, declining mainstream news media, conspiracy theories, and of technology mediating most interactions. This, in turn, has led to the normalisation of parasocial relationships. Influencers and the like are symptoms rather than causes.

I don’t particularly like this aspect of ‘culture’ in 2025, but pointing the finger at the next generation for being ‘double sell-outs’ misses the point. It’s a form of victim-blaming.

At this point, the new ideal for an artistic career is what I’d call the “single sell-out.” The artist was “allowed” to make a few commercial compromises to gain attention in the increasingly competitive marketplace, but once they achieved fame and fortune, they were expected to use their vaulted platform to provide the world with meaningful and ground-breaking art. This actually did happen: The Neptunes leveraged their strong track record of pop hits to push legitimately bizarre minimalist tracks like Clipse’s “Grindin’” and Snoop Dogg’s “Drop It Like It’s Hot.” Beyoncé’s “Formation” was musically adventurous, and the video is now considered “the best of all time.”

Unfortunately these examples became rarer and rarer over time. In fact, the 21st century has been the age of the “double sell-out”: Creators who produce market-friendly content to achieve fame — and then use that fame to pursue even more commerce-for-commerce’s-sake. MrBeast is arguably one of the most important “creators” of our times. He dreams up, produces, and directs elaborate and sensational video content, which made him the #1 channel on YouTube. He then used this world-historical level of fame… to open a generic fast food chain. This has also become common amongst established stars: George Clooney worked hard for decades to become a well-respected actor… who could take the lead role in a Nespresso commercial.

[…]

If we want culture to be culture and not just advertorials for a sprawling network of micro-QVCs pumping out low-quality goods, an easy step would be to re-shift the norms towards, at least, “Don’t be a double sell-out.” This is already a quite generous compromise in that it blesses artists to be conventional to stabilize their income and try to win over large fanbases. But this esteem must be given on the promise that the money and fame are used in pursuit of artistic or creative innovation. Double sell-outs don’t deserve our esteem as “creative” people. They should be content with the reward they chose: the money extracted from fans who snap up their mediocre commodities out of parasocial loyalty.

Source: CULTURE: An Owner’s Manual

Image: Steve Gale

The progressive Left leans professional, managerial, technocratic, and the Right leans energised, slapdash, insurgent

Auto-generated description: Sunlight filters through green leaves, creating a warm, serene atmosphere.

This is a long-ish read, but worth it if you can spare the time beyond my summary. James Plunkett, author of End State: 9 Ways Society is Broken - and how we can fix it gives examples of how there are what he calls “pockets of vitality” in the UK, which are being overlooked with all of the focus on the rise of the Right.

I see some of this due to the cooperative networks I’m plugged into, but this post shows that there’s a lot more of which I’m unaware. I’m looking forward to following and reading more based on Plunkett’s extensive links.

The progressive Left leans professional, managerial, technocratic, and the Right leans energised, slapdash, insurgent. This seems to be at least partly because the Right, and Trumpism in particular, has mainlined energy from every weird corner of the internet, while elite progressivism is relatively detached from the wider ecosystem from which it drew energy historically.

Some people would say this is a function of progressive politics being at a low ebb in general. But I don’t think that’s quite right. It seems to me the vitality is out there, and is arguably at quite a high point, it’s just widely dispersed. And, for complicated reasons — maybe to unpack in a future post — this energy isn’t really flowing into, and reviving, the middle.

When I make this point, people sometimes ask me to point to the energy I have in mind, so I thought it might be interesting to name some examples. So, without trying to be comprehensive, here are ten dispersed pockets of impressive, hopeful, thoughtful work that I would call progressive.*

[* — I’m using the word ‘progressive’ here quite broadly, in its more literal and historical sense. I’m not saying that these are examples of ‘leftwing’ energy. I’m calling them progressive in the sense that they embody high hopes for what people can achieve by collaborating. i.e. these are all people working hard to improve governance, broadly defined. Or, even more broadly, they are people who are developing new and more effective cooperative practices — ways we can make our lives better together.]

The “ten pockets of vitality” he points to, giving examples for each one, are:

  1. Contemporary civics — “rejuvenat[ing] a thicker, more active conception of citizenship and civic life”
  2. Community agency — “a… specific set of techniques, now mature in both theory and practice, to activate agency in communities”
  3. Deliberative democracy — “about seeing democracy as a living process in which we debate, listen, and change our minds” with “democracy as residing in neighbourhoods, more than in elections”
  4. Relational state capacity — “underpinned by deep theory but also embodied in a set of ready-to-use practices”
  5. Internet-era ways of working — “an obvious one but it’s worth mentioning… because diffusion still has decades to run. We now have a whole generation of people who are native to internet-era operating models, moving up through the public and civic sectors, transforming institutions from within. These people are still in the minority, and the winds of inertia are still gale force, but they’re a powerful and widely dispersed source of energy — dotted across local government, charities, and in central departments”
  6. New delivery philosophy — “the basic idea is to transform the centre of government by working at pace at the edges, and seeing what stops you”
  7. Novel institutional forms — “ways to organise human activity that differ from the predominant forms of the 20th century… broaden[ing] out into a more abstract but important debate about the right metaphors and mental models for future governance”
  8. The climate movement — “different to the others in the list in that it’s a vertical rather than a horizontal”
  9. Post-capitalist or non-extractive economic models — “the essence of this work is to experiment with economic models that are regenerative and distributive by design”
  10. Regulating a digital economy — “when I talk about pockets of energy here, I’m thinking partly of the more creative/rebellious thinkers working on these challenges within regulators, but also of the high calibre of debate that exists around regulators”

As ever, innovation is at the edges, helping move the Overton Window, and coming up with ideas to slot in when there’s a crisis:

In essence, I think what’s happening here is that the dominant logic of the old system — a blend of social democratic Fabianism, technocracy, and a narrow class of institutional forms and managerial practices — has proven incapable of governing affordably, safely, and responsively in contemporary conditions (for example, in light of the complexity of accumulated ecological and human crises (loneliness, mental illness, etc), and the first and second order effects of digital technology).

[…]

The middle of a system… isn’t just insulated, but, worse, is subject to forces that inhibit change or distort the necessary signals and feedback loops. For one thing, the middle of a system is where those sociological forces are strongest. Deep inside systems, people get locked into a gamified world that has a tight internal coherence, but little link to outside conditions.

Source: James Plunkett

Image: Micah Hallahan

You can't lick a badger twice

Pixel art showing a blonde character licking a cartoon badger against a pink background.

I don’t use Google search and couldn’t get it to do this when I experimented, but apparently appending the word ‘meaning’ to any phrase leads to a curious result. The AI summary will make something up as if it’s some kind of folk wisdom.

It’s fun, but also if you think about it for more than a second, a bit dangerous. Those with lower digital literacy skills are likely to see the AI summary as authoritative. I even had to point this out to my GP when he quickly looked something up during a consultation!

I’d point out that DuckDuckGo, a search engine I’ve been using for over a decade, is much better on an everyday basis than Google. I mean, I spend a lot of time online and research is kinda part of my job. So take it from me, you do not need Google search.

Note: I don’t AI-generate many images these days, but I couldn’t resist it for this post!

Last week, the phrase “You can’t lick a badger twice” unexpectedly went viral on social media. The nonsense sentence—which was likely never uttered by a human before last week—had become the poster child for the newly discovered way Google search’s AI Overviews makes up plausible-sounding explanations for made-up idioms (though the concept seems to predate that specific viral post by at least a few days).

Google users quickly discovered that typing any concocted phrase into the search bar with the word “meaning” attached at the end would generate an AI Overview with a purported explanation of its idiomatic meaning. Even the most nonsensical attempts at new proverbs resulted in a confident explanation from Google’s AI Overview, created right there on the spot.

[…]

…Google’s AI Overview suggests that “you can’t lick a badger twice” means that “you can’t trick or deceive someone a second time after they’ve been tricked once. It’s a warning that if someone has already been deceived, they are unlikely to fall for the same trick again.” As an attempt to derive meaning from a meaningless phrase—which was, after all, the user’s request—that’s not half bad. Faced with a phrase that has no inherent meaning, the AI Overview still makes a good-faith effort to answer the user’s request and draw some plausible explanation out of troll-worthy nonsense.

Contrary to the computer science truism of “garbage in, garbage out,” Google here is taking in some garbage and spitting out… well, a workable interpretation of garbage, at the very least.

[…]

The fact that Google’s AI Overview presents these completely made-up sources with the same self-assurance as its abstract interpretations is a big part of the problem here. It’s also a persistent problem for LLMs that tend to make up news sources and cite fake legal cases regularly. As usual, one should be very wary when trusting anything an LLM presents as an objective fact.

Source: Ars Technica

Image: DeepImg

It just so happens that all four of the major web browsers will lose all of their funding all at once when that happens

Auto-generated description: A pattern of interconnected Chrome browser logos is arranged in a grid.

I left Mozilla a decade ago. Back then, most of their revenue came from the Google search deal in Firefox. With their browser share dwindling, you would have thought that they would have done a better job diversifying their income streams. But, no, over 80% of their funding still comes from Google.

Which is a problem. Because the reason that Google even bothers to fund Mozilla to the tune of hundreds of millions of dollars is that it needs Firefox to exist. If there’s no browser competition, then Chrome is a monopoly, and regulators can take action.

In addition to funding Mozilla (and therefore Firefox), Google also pumps around $18 billion (that’s $18,000 million!) into Apple for being the default search option in Safari. The fourth major web browser is Microsoft Edge. Guess what? It’s based on the open-source Chromium browser which forms the basis of Google Chrome. I use Brave (also based on Chromium). The web browser market is essentially several Googles in a trench coat.

The US Department of Justice has argued that Google shouldn’t be able to make search deals with Mozilla and Apple. In addition, it has argued that Google should be forced to sell off Chrome, and stopped from paying for Chrome and Chromium development. Although Microsoft does contribute some code back to Chromium, it’s minuscule compared to Google’s contribution. So in terms of development budget, Microsoft Edge will lose around 94% of its funding if and when that happens.

This is terrible for the web, and it’s not exactly as if people haven’t been predicting this for years. One of the interested parties is, surprise surprise, OpenAI, the company behind ChatGPT. If they end up with Chrome, which has over 65% market share, it’s game over for privacy and security for most people. This is an existential crisis for the open web.

The DoJ’s argument against Google makes perfect sense. The Sherman Antitrust Act was specifically designed to target “competitors” who form illegal agreements to maintain monopoly power.

It’s obviously illegal for Google to prop up Mozilla Firefox and Apple Safari as if they were co-equal competitors to Chrome. And Chrome itself is the biggest “search-engine deal” of all, which is why the DoJ is so focused on forcing Google to divest from Chrome.

It just so happens that all four of the major web browsers will lose all of their funding all at once when that happens.

Forcing Google to stop funding its “competitors” and divest Chrome doesn’t just punish Google; it simultaneously pulls the financial rug out from under every single major browser, including those positioned as alternatives.

The laws intended to foster competition will inadvertently destabilize the foundational tools millions rely on to access the internet.

Source: Dan Fabulich

Image: Growtika

The narrative slippage and metaphorical vagueness that many important people use when they talk about AI means it can be very difficult to know what they mean

Auto-generated description: A double exposure photograph features a person holding a bouquet of flowers, blending their silhouette creatively with the floral arrangement.

I’m working on an AI Literacy project at the moment which involves, in part, providing some guidance for the BBC. I’ve collected some frameworks which I’m going through with my WAO colleagues. Some are pretty useful, others are not.

We’re coming up with criteria to help guide our research, things such as whether a framework includes:

  1. Definition of (generative) AI
  2. Defined target audience(s)
  3. Explanation of how it was created (decisions, tradeoffs, names of authors, etc.)
  4. List of skills and competencies

In addition, it should come from a reputable source.

Beyond those essentials, it would be nice to have:

  1. Examples of application to real-world situations and issues
  2. At least a mention of the difference between AI safety vs AI ethics
  3. A visual representation of the framework

I bring this up by way of context as Rachel Coldicutt’s recent post helps problematise not only AI Literacy, but AI itself. I’m not sure I’d share her ‘social’ definition of AI as “a set of extractive tools used to concentrate power and wealth” as it ascribes too much intentionality. However, I do think that the quotation from her which I’ve used to title this post is an important insight.

As I’ve discussed at length elsewhere, there are different kinds of ambiguity and a lot of language around AI is what I would deem “unproductively ambiguous.”

“AI literacy” is not just a matter of getting to grips with data and algorithms and learning how Microsoft tools actually work, it also requires understanding power, myths, and money. This blog post explores some ways those two letters have come to stand for so many different things.

There are many reasons AI is an ambiguous and shifting set of concepts; some are due to the technical complexity of the field and the rapidly unfolding geopolitical AI arms race, but others are related to straightforward media manipulation and the fact that awe and wonder can be catching. However, a fundamental reason AI is a confusing term is that it’s not actually the right terminology for the thing it describes.

[…]

“[A]rtificial intelligence” is not a highly specific technical label, but a name given in haste by someone writing a funding proposal. The fact that the term AI has persisted for so long and expanded to include the broader field of related computer science clearly indicates that many people find it useful, but you don’t need to get hung up on that particular pairing of terms or look for a deeper meaning to understand what it is. “Artificial intelligence” is almost like a nickname or a brand name; something understood by many to stand for something, rather than a precise description of any particular qualities.

[…]

The narrative slippage and metaphorical vagueness that many important people use when they talk about AI means it can be very difficult to know what they mean – which in turn makes it harder to keep them accountable or to ask precise, difficult questions.

When heroic words are used to describe technologies that operate on the horizon of hope and ambition, it can feel awkward to ask practical questions such as “what are you actually proposing?” and “how will it work?”, but real knowledge requires detail and specificity rather than waves of shock and awe. AI technologies are not actually myths and should not be discussed as such; they are real technologies that use data, hardware, and human skills to achieve their social, economic, environmental, political, and technological change.

Source: Careful Industries

Image: Teena Lalawat

Cheat on everything?

Auto-generated description: Four cartoon robots are working at laptops with AI on their chests.

Stephen Downes shares news that Cluely, a startup promising that you can “cheat on everything”, is proving controversial. As he says, the company “leans heavily into the ‘cheating’ aspect of the service, which is producing a not unexpected visceral reaction on the part of pundits.”

I tried Rewind.ai (currently rebranding to ‘Limitless’) when Paul Stamatiou was a co-founder. Instead of talking about “cheating” and creating socially awkward videos, Rewind.ai talks of being a “personalized AI powered by everything you’ve seen, said, or heard.” Well, so long as it happens on your computer. Presumably these people don’t go outside.

In my experience, startups get attention and traction by being genuinely useful and unique (very rare!), because there’s a big name attached to them (common), or because they’re socially transgressive. It feels to me like we’re seeing more of the last of these at the moment, including Mechanize which, somewhat laughably, believes that their “total addressable market” is “$60 trillion a year.”

That’s not to say that automation of many so-called “white collar” tasks isn’t possible or desirable. Just not by tech bros, thank you very much. I’d encourage you to read Fully Automated Luxury Communism for a more radical socialist look at how all this could play out.

On Sunday, 21-year-old Chungin “Roy” Lee announced he’s raised $5.3 million in seed funding from Abstract Ventures and Susa Ventures for his startup, Cluely, that offers an AI tool to “cheat on everything.”

The startup was born after Lee posted in a viral X thread that he was suspended by Columbia University after he and his co-founder developed a tool to cheat on job interviews for software engineers.

That tool, originally called Interview Coder, is now part of their San Francisco-based startup Cluely. It offers its users the chance to “cheat” on things like exams, sales calls, and job interviews thanks to a hidden in-browser window that can’t be viewed by the interviewer or test giver.

Cluely has published a manifesto comparing itself to inventions like the calculator and spellcheck, which were originally derided as “cheating.”

Source: TechCrunch

Image: Mohamed Nohassi

These other, really important things intrude on my thinking and distract me

Auto-generated description: A notebook with a motivational quote about choices and realities is open next to a pen on a wooden surface.

The latest issue of New Philosopher magazine is about ‘choice’ and features a wonderful interview with Barry Schwartz, who is the Darwin Cartwright Emeritus Professor of Social Theory and Social Action at Swarthmore College. He’s the author of The Paradox of Choice: Why More Is Less which I’ve added to my reading list.

I want to excerpt a couple of parts which I think are particularly insightful. The first is about how he reduced the assessment burden on young people, who he believes suffer from a greater decision burden than previous generations.

Zan Boag: I recall in one of your talks, you mentioned that it came as something of a revelation to you when you realised students simply didn’t have as much time as students in the past.

Barry Schwartz: That was my interpretation.

What I realised, or what I thought, I never gathered data on this in any official way, but when I went to school, so many of the really important decisions we face in life were essentially made for us. People were not plagued by questions of sexual identity, weren’t plagued by questions about what their romantic life should look like. Should I have a girlfriend? The default was yes. Should I get married? The default was yes. When should I get married? Soon as I graduated from college. That was the default, and so on. And so there were still issues like, how do I find the right person?

But it wasn’t the case that every last hour of your daily life was consumed by a need to focus on doing studies without having these other, really important things intrude on my thinking and distract me. Well, this was much less true for my children and it is ever so much less true for my grandchildren.

The second excerpt is the follow-up to the question about how problematic it is to be a ‘maximizer’ in life. I’d usually use the term ‘perfectionist’ and have certainly had to overcome this tendency in myself, as it just makes one miserable. As Schwartz points out, as you get older, you have to come to terms with the fact that you have chosen certain options instead of others, and to be satisfied with the way things are, rather than how they could have been.

Zan Boag: It makes it particularly difficult with these big life decisions, whether it’s jobs, where we live, or partners, because we’re faced with so much choice. People can always wonder about the life they could have led had they made a different decision – say to pursue writing instead of banking; move to San Francisco instead of Sydney; ballroom dancing over Taekwondo. They’re making choices that then will affect the way they lead their lives. Let’s call this a phantom life, the ‘other’ life. How can people find satisfaction with their choices when there are so many available, and the choices you make will often seem like the incorrect ones? How can they find some sort of satisfaction?

Barry Schwartz: I think in the book that I wrote, which by the way, as I told you in an email, I’m about to start writing a new edition of, I make some suggestions, but I think the truth of the matter is that it is very hard to shut off these enemies of satisfaction in the modern world. What we’re talking about, and what I wrote about, is rich society’s problem.

Most people in the world don’t have the problem that there are too many options. They have the opposite problem. But if you happen to live in a part of the world like you and I do, that is the problem. And we don’t have the tools for shutting it down. I make some suggestions, like limit the number of options you consider. Fine. I’m only going to look at six pairs of jeans. It’s one thing to say it and it is another thing to do it, and it’s still a hard thing to do and not be nagged by the knowledge that there are all these options out there that you didn’t look at.

It’s sort of like just quitting smoking. ‘Yeah, I’ll just quit smoking.’ Nice, easy to say, but really, really hard to do when you suffer at least initially when you quit smoking. And so, I think that you have to be prepared for a fair amount of discomfort and a lot of work to change your approach to making decisions, big ones or small ones.

It’s not a surprise to me that young people are in such bad shape because one of the things that we found is that the younger you are, the more likely you are to be a maximizer in decisions. I think one of the things that you learn as you age is that good enough is almost always good enough. But you don’t see too many 20-year-olds who think that. Experience teaches you that good enough is good enough.

After suffering for a generation or so, you settle into a life where you’re satisfied with good enough results of your decisions. But meanwhile, that’s 20 or 30 years of suffering. And what I think… I don’t know if you’re familiar with this somewhat controversial argument about what social media is doing to the welfare of young people.

Source: New Philosopher: Choice

Image: Elena Mozhvilo

In some ways, FOMO is a philosophical insight

Auto-generated description: A person is sitting on the steps of a wide, empty escalator.

I’ve Laura Hilliger to thank for pointing me towards The Gray Area podcast, which takes “a philosophy-minded look at culture, technology, politics, and the world of ideas.” So it fits hand-in-glove with what I discuss here on Thought Shrapnel.

In this particular episode, host Sean Illing talks with Kieran Setiya about middle age, mid-life crises, and generally takes a philosophical look at what’s going on when people reach their forties. Being the ripe old age of 44, this is absolutely in my interest zone.

What follows is my transcription (via Sonix.ai).

Sean Illing (SI): One of the things about life that appears to be hard is middle age. And you and you wrote a book about midlife crises. How do you define a midlife crisis?

Kieran Setiya (KS): Actually, kind of like the self-help movement, midlife crisis is one of those funny cultural phenomena that has a particular date of origin. So in 1965, this Canadian psychoanalyst, Elliott Jaques, writes a paper, ‘Death and the Midlife Crisis’. And that’s the origin of the phrase. And he is looking at patients and also, in fact, the lives of creative artists who experience a kind of midlife creative crisis. So it’s people in their late thirties. I think the stereotype of the midlife crisis is that it’s a sort of paralysing sense of uncertainty and being unmoored. Nowadays, I think there’s been a kind of shift in the way people think about the midlife crisis, that people’s life satisfaction takes the form of a kind of gentle U-shape: basically, even if it’s not a crisis, people tend to be at their lowest ebb in their forties. And this is men and women, it’s true around the world to differing degrees, but it’s pretty pervasive. So I think nowadays, often when people like me talk about the midlife crisis, what they really have in mind is more like a midlife malaise. It may not reach the crisis level, but there seems to be something distinctively challenging about finding meaning and orientation in this midlife period in your forties.

SI: Well, I’m 42. I just turned 42. It sounds like I’m right in the middle of my midlife crisis.

KS: You’re, you know, not everyone has it, but you’re predicted to hit it. Yes.

SI: Yikes. Well, what is it about midlife that generates all this anxiety and disturbing reflection?

KS: I think really there are many midlife crises. It’s not just one thing. I think some of them are looking to the past. So there’s regret. There’s the sense that your options have narrowed. So whatever space of possibilities might have seemed open to you earlier, whatever choices you’ve made, you’re at a point where there are many kinds of lives that might have been really attractive to you, that it’s now clear to you in a vivid sort of material way that you can’t live. So there’s missing out. There’s also regret in the sense of things have gone wrong in your life. You’ve made mistakes, bad things have happened, and now the project is, how do I live the rest of my life in this imperfect circumstance? The dream life is off the table for most of us. And then I think there’s also things that are more present-focused. So often people have a sense of the daily grind being empty, and that’s partly to do with so much of it being occupied by things that need to be done, rather than things that make life seem positively valuable. It’s just one thing after another, and then death starts to look like it’s at a distance that you can measure in terms you kind of really palpably understand. Like you, you have a sense of what a decade is like, and there’s only three or four left at best.

SI: The thing about being young is the future is pure potential. Ahead of you is nothing but freedom and choices. But as you get older, life has a way of shrinking. Responsibilities pile up. You get trapped in the consequences of the decisions you’ve made, and the feeling of freedom dwindles. That’s a very difficult thing to wrestle with.

KS: I think that’s exactly right. I mean, part of what’s philosophically puzzling about it is that it’s not news that in a way, whatever your sense of the space of options was when you were, say, 20, you knew you weren’t going to get to do all of the things. So there’s a sense in which it’s kind of puzzling that when at 40, even if things go well, you didn’t get to do all of the things, that’s not news. You knew that wasn’t going to happen. What it suggests, and I think this is a kind of philosophical insight, is that there is a profound difference between knowing that things might go a certain way, well or badly, and knowing in concrete detail how they went well or badly. And that’s something that I think we learn from this transition that we make in midlife, that the kind of pain of just discovering the particular ways in which life isn’t everything you thought it might be, even though you knew all along that it couldn’t be everything you hoped it might be. That suggests that there’s a certain aspect of our emotional relationship to life that is missed out. If you just ask in abstract terms, what will be better or worse, what would make a good life? And so I think philosophy needs to kind of incorporate that kind of particularity, that kind of engagement with the texture of life in a way that philosophers don’t always do. I mean, I think there’s another thing philosophy can say here that’s more constructive, which is part of the sense of missing out has to do with what philosophers call incommensurable values.

The idea that, you know, if you’re choosing between $50 and $100, you take the hundred dollars and you don’t have a moment’s regret. But if you’re choosing between going to a concert or staying home and spending time with your kid, either way, you’re going to miss out on something that is sort of irreplaceable, and that’s pretty low stakes. But one of the things we experience in midlife is all the kinds of lives we don’t get to live that are different from our life, and there’s no real compensation for that, and that can be very painful. On the other hand, I think it’s useful to see the flip side of that, which is the only way you could avoid that kind of missing out, that sense that there’s all kinds of things in life that you’ll never get to have. The only way you could avoid that is if the world was suddenly totally impoverished of variety, or you were so monomaniacal you just didn’t care about anything but money, for instance, and you don’t really want that. So there’s a way in which this sense of missing out, the sense that there’s so much in the world we’ll never be able to experience, is a manifestation of something we really shouldn’t regret and in fact, should cherish, namely, the evaluative richness of the world, the kind of diversity of good things. And there’s a kind of consolation in that, I think.

SI: So is that to say that FOMO is always and everywhere a philosophical error, or is it actually valid?

KS: In some ways, I think it’s a philosophical insight. In a way, I think there’s a kind of existential FOMO that is part of what we have in midlife, or sometimes earlier, sometimes later. But I think that sense that it really is true that we’re missing out on things and that there’s no substitute for them. That’s really true. The kind of rejoinder to FOMO is, well, imagine there weren’t any parties you didn’t get to go to. That wouldn’t be good either, right? You want there to be a variety of things that are actually worth doing and attractive. We want that kind of richness in the world, even though one of the inevitable consequences of it is that we don’t get to have all of the things.

SI: One of the arguments you make is how easily we can delude ourselves when we start pining for the roads not traveled in our lives. And, you know, you think, what if I really went for it? What if I tried to become a novelist or a musician, or join that commune, or, I don’t know, pursued whatever life fantasy you had when you were younger? But if you take that seriously and consider what it really means, you might not like it because the things you value the most in your life, like, say, your children, well, they don’t exist if you had zigged instead of zagging 15 or 20 years ago. And that’s what it means to have lived that alternative life. And I guess it’s helpful to remember that sometimes, but it’s easy to forget it because you just you’re imagining what you don’t have.

KS: This is, again, about the kind of danger of abstraction that, in a way, philosophy can lead us towards this kind of abstraction, but it can also tell us what’s going wrong with it. So the thought I could have had a better life, things could have gone better for me. It’s almost always tempting and true. But when you think through in concrete particularity, what would have happened if your failed marriage had not happened? Often the answer is, well, I would never have had my kid or I would never have met these people. And while you might think, yeah, but I would have had some other unspecifiable friends who would have been great and some other unspecifiable kid who would have been great. I think we rightly don’t evaluate our lives just in terms of those kinds of abstract possibilities, but in terms of attachments to particulars. And so if you just ask yourself, could my life have been better? You’re kind of throwing away one of the basic sources of consolation. A rational consolation, I think, which is attachment to the particularity of the good things, the good enough things in your own life, even if you acknowledge that they’re not perfect and that there are other things that could have been in a certain way better.

SI: This is why I always loved Nietzsche’s idea of amor fati, this notion that you have to say yes to everything you’ve done and experienced, because all the good and bad in your life is part of this chain of events. And if you alter any of those events at any point in the chain, you also alter everything else that followed in unimaginable ways.

KS: I mean, I do think there’s a profound source of affirmation there. I think my hesitation is just that. It’s not that all the mistakes that we make, or the terrible things that happen to us, are redeemed by attachment to the particulars of our lives. It’s that there’s always this counterweight. At the very worst, we’re going to end up with some kind of ambivalence. And that’s better than the situation of mere unmitigated regret. But it’s not quite the full embrace of life that a certain kind of philosophical consolation might have given us.

Source: The Gray Area – Halfway there: a philosopher’s guide to midlife crises

Image: Alejandro G.

A sense that one has completed, with digital certainty, a task whose form may or may not have been made clear from the outset

Auto-generated description: A person with headphones is smiling and using a keyboard and mouse at a computer desk in a dimly lit room with other people around.

Stephen Downes brought my attention to a post on the website LessWrong, which, as he points out, is a prime (and increasingly rare) case of the comments section being more interesting than the main content itself.

One of the commentators brings up the work of David Golumbia who passed away a couple of years ago. Golumbia wrote an article which questioned what gamers are doing when they’re gaming, especially with role-playing games (RPGs) and first-person shooters (FPS).

The philosopher Ludwig Wittgenstein famously pointed out how difficult it is to define what a ‘game’ is. Many things can be games or game-like, but trying to neatly categorise what makes them so is seemingly impossible. Do games have to be competitive? No. Do games have to be fun? Well… no. And so on.

There’s a lot to think about in the Golumbia article, and (for once!) I’m going to set aside the very pointed critique of the capitalist element and the power dynamic. Instead, I’ll excerpt the part about games providing “the human pleasure taken in the completing of activities with closure and with hierarchical means of task measurement.”

For me, personally, most of my gaming sessions are usually with and/or against other human players. For example, on a Sunday night, my gaming crew is enjoying Payday 3, a game about robbing, stealing, and looting. There are aims and objectives, and tasks to complete and check off. It’s satisfying. Now I know why.

If we cast aside for a moment the generic distinction according to which programs like WoW, Halo, and Half Life are games while Microsoft Excel, Microsoft Word, and Adobe Photoshop are “productivity tools,” it becomes clear that the applications have nearly as much in common as makes them distinct. Each involves a wide range of simulations of activities that can or cannot be directly carried out in physical reality; each demands absorptive, repetitive, hierarchical tasks as well as providing means for automating and systematizing them. Each provides distinct and palpable feelings of pleasure for users in any number of different ways; this pleasure is often of a type relating to some kind of algorithmic completeness, a “snapping” sense that one has completed, with digital certainty, a task whose form may or may not have been made clear from the outset (finishing a particular spreadsheet or document, completing a design, or finishing a quest or mission). In every context in which these activities are completed, whether that context is established by the computer or by people in the physical world, there is indeed some sense of “experience” having been gained, listed, compiled by the completion of a given task. Arguably, this is a distinctive feature of the computing infrastructure: not that tasks were not completed before computers (far from it) but rather that the digitally-certain sense of having completed a task in a closed way has become heightened and magnified by the binary nature of computers.

What emerges as a hidden truth of computer gaming — and no less, although it may be even better hidden, of other computer program use — is the human pleasure taken in the completing of activities with closure and with hierarchical means of task measurement. Again, this kind of pleasure certainly existed before computers, but it has become an overriding emotional experience for many in society only with the widespread use of computers. A great deal of the pleasure users get from WoW or Half Life, as from Excel or Photoshop, is a digital sense of task completion and measurable accomplishment, even if that accomplishment only roughly translates into what we may otherwise consider intellectual, physical, or social goal-attainment. What separates WoW or Half Life from the worker’s business world is thus not variability or “give” but rational certainty, the discreteness of goals, the general sense that such goals are well-bounded, easily attainable, and satisfying to achieve, even if the only true outcome of such attainment is the deferred pursuit of additional goals of a similar kind.

[…]

At the very least, WoW and Half Life, and their cohort are therefore not games in the sense to which we have become accustomed. It seems clear that we call these programs “games” because of the intense feelings of pleasure experienced by players when we engage with them and because they appear on the surface not to be involved in the manipulation of objects with physical-world consequences. On reflection, neither of these facts proves very much… And the fact that computer games are pleasurable cannot, by itself, furnish grounds for calling them games: after all, games constitute only a part of those activities in the world that give us pleasure.

[…]

Can there be any doubt about the potential attractiveness of an apparently human world in which we understand clearly how to attain power, what to do with it, and that the rules by which we operate do not change or change only by explicit order? The deep question such games raise is what happens when people bring expectations formed by them into the world outside.

Source: Golumbia, D. (2009). ‘Games Without Play’. New Literary History, Vol. 40, No. 1, Play (Winter, 2009), pp. 179-204. Available at: https://diglit.community.uaf.edu/wp-content/uploads/sites/511/2015/01/Games_without_Play.pdf

Image: ELLA DON

A lot of strange things start to make more sense — sometimes distressingly so

Auto-generated description: Five cherubs with small wings are carrying a large can of condensed milk while wearing colorful sashes.

I was listening to Helen Beetham talk with Audrey Watters on her imperfect offerings podcast, when Audrey mentioned a Bloomberg piece which I’ve excerpted below. Essentially the economy becomes distorted when all of the money is at the top of society and everything is being produced to fit the needs of rich people.

This chimes with what economist Gary Stevenson calls ‘The Squeeze’ which I wrote about recently. While the article is about the US, which is a more unfettered free market economy, the same is also likely to be happening at different rates to other western economies.

The question, of course, is what we do about it. I mean, to be blunt, we can either tax the rich or end up eating them.

Recent economic headlines do not add up to a coherent picture: Since 2020, Americans have spent lavishly on discretionary goods and services, even as the cost of necessities has soared. Consumer debt has ballooned right along with prices, and Americans are now defaulting on their credit cards at rates unseen since the Great Recession. Wage growth has been strong, but inflation has thwarted its ability to help most Americans get ahead. So who’s booking all those first-class airline seats and tables at fancy restaurants? Why are tickets for concerts and major sporting events so expensive and also so sold out?

A recent analysis of consumer spending from Moody’s Analytics, first covered in the Wall Street Journal, provides an answer: Rich people really are just firing a cash cannon into the consumer market. The wealthiest 10% of American households—those making more than $250,000 a year, roughly—are now responsible for half of all US consumer spending and at least a third of the country’s gross domestic product. If you keep that in mind, a lot of strange things start to make more sense—sometimes distressingly so.

[…]

Such a high concentration of financial resources presents a whole host of risks and complications, including general economic fragility. If the extreme spending habits of a small group of people are what’s keeping a large portion of the economy churning, then that group of people also has an outsize ability to bring everyone else down with them.

[…]

When you put a huge proportion of a nation’s total resources in a small number of hands, that distortion also plays out in the everyday economy. Consumer-facing companies want earnings growth and need ways to hold on to their profit margin if components or labor become more expensive. An easy way to do that is by going upmarket to find buyers who are spending freely. You can see how this has played out in the car market: Automakers have pushed to develop more of the big, pricey SUVs that wealthier buyers prefer and devoted fewer resources to smaller, more affordable models. That’s helped push the average sale price of new cars up more than 50% since 2014, according to a Cox Automotive analysis of data from the Bureau of Labor Statistics. The average new car in the US now costs almost $50,000. When the math on producing goods and services only pencils out when you’re selling to the rich, it doesn’t just change the availability of designer handbags or hotel suites; it affects how entire industries organize themselves.

[…]

Letting so many of the country’s economic resources accrue to so few people risks a lot more than just the economy—it eats away at social cohesion in ways that have leaked into other areas of American life and politics. It breeds distrust and recrimination among individuals and groups of people, as well as toward the systems and institutions we’re supposed to trust to make society work in ways that are at least minimally fair. The end result is a combination of economic fragility and social disaffection that eventually even high earners might not be able to buy their way out of.

Source: Bloomberg

Archive link (no paywall): Archive.is

Image: Boston Public Library

I've done this a couple of times before but this time feels slightly different

Auto-generated description: A stack of golden-brown pancakes is neatly arranged on a white plate.

Tom Watson, who apart from doing generally awesome stuff somehow also has time to star in a documentary about ultrarunning, saw a recent Thought Shrapnel post and wrote about what tech he’s using.

I need to do my own, and actually Tom’s post has made me realise the extent to which I’m dependent upon Google and, more recently and to a lesser extent, Perplexity.

Prompted by Doug… and a couple of the Colophons (new word for me!) by Matt and Steve I thought I’d outline “my stack”. I’ve done this a couple of times before but this time feels slightly different.

[A]s someone who has been gently prompting people to not be so beholden to Big Tech, to look more at Open Source, to think more ethically, and to at least consider European Alternatives, I feel I should at least discuss where I’m trying to do this, where I’m succeeding and where I’m often failing.

[…]

…I use AI for specific things when I think they will make something more effective. I’m therefore always looking for the best model, and best use case. And things change all the time. So I purposefully build specific components that allow me to easily switch models and providers. If there is one thing I’d advise when thinking about building AI into an organisation, it’s to ensure you aren’t creating provider lock-in for yourself. Quite a few AI wrappers (tools that put some kind of front end onto a model) allow you to switch models. But not all. And if you are building yourself, there is a risk you just lock yourself into a depreciating model or a provider that just turns out to be mega shitty.

[…]

It’s not easy to avoid the big tech trap, but I think I’m doing ok. Also I’m not saying you definitely should, but I think you should at least consider what you use and what this means, and if you have principles maybe they should cost you something.
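Tom’s point about not creating provider lock-in can be sketched as a thin routing layer: the rest of your code talks to one function, and swapping models or vendors is a one-line change. This is only a minimal illustration; the provider functions below are hypothetical stand-ins, not real vendor SDK calls.

```python
# Minimal sketch of avoiding AI-provider lock-in: route every call through
# a single interface so no other part of the codebase imports a vendor SDK.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Completion:
    text: str
    provider: str


# Each provider is just a function from prompt to Completion. In a real
# system these would wrap actual SDK calls; here they are placeholders.
def fake_openai(prompt: str) -> Completion:
    return Completion(text=f"[openai-style answer to: {prompt}]", provider="openai")


def fake_mistral(prompt: str) -> Completion:
    return Completion(text=f"[mistral-style answer to: {prompt}]", provider="mistral")


PROVIDERS: Dict[str, Callable[[str], Completion]] = {
    "openai": fake_openai,
    "mistral": fake_mistral,
}


def complete(prompt: str, provider: str = "mistral") -> Completion:
    """Single entry point: switching vendors means changing one argument."""
    return PROVIDERS[provider](prompt)


result = complete("Summarise this post", provider="mistral")
print(result.provider)
```

The design choice here is the registry dictionary: adding a new provider is one entry, and dropping a "mega shitty" one doesn't ripple through the codebase.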

Source: Tomcw.xyz

Image: Matthias Reumann

It's much easier to go carless if your city has good public transit

Auto-generated description: A bar graph compares global average per capita emissions to projected and effective impacts of various behavioral changes in transportation, energy, and food, illustrating their potential to reduce emissions.

I’m a vegetarian who drives an electric vehicle (EV). In a few weeks' time, we’re getting a heat pump installed so that we can remove our gas boiler. These are all climate-positive things to do, and I’m trying to do my bit.

This article by the World Resources Institute shows how important it is that there is an infrastructure that enables individual decision-making to take place. For example, I’ve been vegetarian now for eight years, and it’s much easier to remove meat from your diet these days than when I started to do so in 2017. Likewise, because of investment in EV infrastructure, these days it’s unproblematic to own or lease an EV.

It’s interesting being an early-ish adopter of air source heat pump technology in the UK. The process is not as smooth as it could be, with our driveway having to be dug up to upgrade the total electricity supply capacity entering our property. So, although we have visited a couple of heat pump installations and there is a government grant, it’s still more expensive, and involves more upheaval, than just getting another combi boiler.

Coupled with active hostility in some quarters, it’s a good example of how the Overton Window can apply to technology interventions and pro-climate lifestyle choices. That’s why, as well as making such choices ourselves, we should be aware of, and be advocating for, the systems within which those choices can be made easier.

Our data shows that pro-climate behavior changes, such as driving less or eating less meat, could theoretically cancel out all the greenhouse gas (GHG) emissions an average person produces each year — specifically among high-income, high-emitting populations.

But it also reveals that efforts focused exclusively on changing behaviors, and not the overarching systems around them, only achieve about one-tenth of this emissions-reduction potential. The remaining 90% stays locked away, dependent on governments, businesses and our own collective action to make sustainable choices more accessible for everyone. (Case in point: It’s much easier to go carless if your city has good public transit.)

[…]

We found that, in theory, shifting to 11 pro-climate behaviors we analyzed in the energy, transport and food sectors could reduce individuals' GHG emissions by about 6.53 tonnes per year. This would more than cancel out what an average person currently emits (about 6.3 tonnes per year). However, our data also shows that when people attempt these changes in the real world, without supportive systems, they typically only reduce emissions by about 0.63 tonnes yearly — just 10% of what’s theoretically possible.

It’s not that individual changes don’t matter; when someone switches to an electric vehicle (EV) or avoids a flight, they make a real impact. The problem is that without supportive infrastructure, policies or incentives (such as public EV chargers or financial subsidies), these programs struggle to drive the broad-based change the world really needs.

Source & image: World Resources Institute

Workers of the future must be emboldened to eschew wages in favour of dropping into the abyss

Auto-generated description: A shadow of a person wearing a hat is cast on grass, with two dandelions aligning as eyes.

Note: as regular readers will be aware, my habit is to quote part of the excerpt as the title of Thought Shrapnel posts. I, personally, am not advocating for abyss-directed activity.

I’m not sure who the cipher “aethn” behind this essay is, but this is a pretty standard argument dressed up in fancy (and somewhat eschatological) language. I’ve tried to excerpt the main thrust, which is: LLMs are getting better and seem to be starting to replace some lower level jobs; this will continue and cause a rupture in the fabric of society. Somehow we need to prepare for this.

While I do believe that AI is somewhat qualitatively different from previous technological inventions, I’m also a student of history and so know that disruption doesn’t happen everywhere all at once. As William Gibson is famously quoted as saying, “The future is already here — it’s just not very evenly distributed.” Just because some people, like the author (and like me), are messing around with LLMs and finding them powerful for our work, doesn’t mean that everyone else is.

Essays like this tend to miss out existing inequality in a rush to talk about future inequalities. And, it has to be said that our western economic and political structures have proven remarkably resilient to a number of shocks over the past centuries. Fredric Jameson noted that, “it is easier to imagine the end of the world than to imagine the end of capitalism.” I’d note that, sometimes, one person’s “violent revolution” is another person’s evolution of capitalism into a new form.

We are at the precipice of a revolution more violent than the Industrial Revolution. This revolution is not about the typical vulgar parochial anxieties on job security—although it is part of it—it’s about a violent upheaval of the very socio-economic fabric in the way our world is organized.

[…]

The latest innovations go far beyond logarithmic gains: there is now GPT-based software which replaces much of the work of CAD Designers, Illustrators, Video Editors, Electrical Engineers, Software Engineers, Financial Analysts, and Radiologists, to name a few. This radical automation exists without any sophisticated fine tuning or training.

[…]

The frequent naive view of the amount paid in a wage is that it’s proportional to the difficulty of the job. If this were the case then certainly all wages will substantially be decreased with the advent of LLMs and that regular economic structure will be maintained. Instead, through the normal polemics we find that’s not the case.

Wage is instead best viewed from the perspective of the profit maximizing economic agent, as in what motivates such an agent to accept a wage from another party at all. If such an agent were able to endeavor alone and capture all of the value from its enterprise it would do so. […] [T]he agent must determine if an offered wage is greater than the expected value of its solo enterprise. We then find that wage must be greater than the expected value of the opportunity cost of the uncaptured labor value incurred due to employment. For much knowledge based work, this is acute since with the same skills needed for employment one can make a competing enterprise to their employer and capture all the value. Other professions require you to have large capital to do so, so the opportunity cost is either non-existent if you cannot access that capital or further discounted by the financing cost and the risk.

[…]

Generally aligning a team on a common product requires expensive payments to each member to provide contributions. Indeed, the early-stage venture capital industry relies on this fact.

The latest LLMs make such a provision completely redundant. The proprietor of today can supervise and delegate tasks required to build a product to the latest GPTs in virtually any knowledge field. The single proprietor now has the efficacy of maybe a team of 6-10 conservatively and further is able to produce even higher quality work.

[…]

One may demur and claim there simply won’t be enough knowledge-based services in demand, however we instead find a Jevons paradox, where demand for these services increases. The reduction in costs of producing the same services will make them more ubiquitous and bespoke on even a per-individual basis.

[…]

Corporations whose products have no moat by economic scale or network will be forced into specializing their products as they surrender market share to the sea of companies. In the Deleuzian fashion, proprietors can build almost as quickly as they can imagine, as predicted, rattling the foundations of the economic order itself.

[…]

The social order must change drastically to a future where institutions are no longer designed to feed corporations future employees, they merely won’t harbor that demand. Many existing knowledge based workers will largely have no choice but to engage in enterprise. Educational systems must adapt and accommodate this new entrepreneurial exigency for labor in the new economic order. Workers of the future must be emboldened to eschew wages in favor of dropping into the abyss in order to have any meaningful income.

Source: aethn’s essays

Image: Maksym Mazur

I have to acknowledge and accept the fact that I use tools built by awful people to create beautiful things.

Auto-generated description: Colorful, abstract light patterns create a textured, vibrant display.

As the author of this article, Ankur Sethi, ponders, why is it that as people interested in technology we often don’t hold the rest of our consumption and use to the same standards as the digital world? Do we change where we buy our clothes and choose which car we drive based on similar ethical standards to those we use when we select our operating systems and digital platforms?

It’s a reminder, I guess, that there’s no ethical consumption under capitalism. But I, for one, can still try.

We’ve structured our society so that the best products and services are made by the worst people in the world. Of course you can deliver packages earlier than everyone else if you overwork your employees. Of course you can sell the fastest computers at the cheapest prices if you keep moving your manufacturing operations to countries with the worst labor and environmental laws. Of course you can build the smartest AI models if you slurp up everybody else’s intellectual property without asking for consent first.

It makes little difference to how tech businesses operate when a smattering of concerned individuals opt out of using their products and services. Things will only change when democratically elected governments across the world step in with regulation, drag Big Tech through the courts, and fine them billions of dollars.

Things will only change when being an asshole stops being a competitive advantage.

Until that day arrives, I have to learn to live in a state of tension with my tools. I have to acknowledge and accept the fact that I use tools built by awful people to create beautiful things.

Source: Ankur Sethi

Image: Sigmund

The problem is not just that the Gmail team wrote a bad system prompt. The problem is that I'm not allowed to change it.

A photographic rendering of a simulated middle-aged white woman against a black background, seen through a refractive glass grid and overlaid with a distorted diagram of a neural network.

I’ve often used the metaphor of the ‘horseless carriage’ in my work around new literacies, making the McLuhan-esque point that people tend to use existing mental models of technology to understand new forms. So, for example, if you remember the original iPad, there were plenty of ‘skeuomorphic’ touches, such as ebooks having fake pages either side of the ones you’re reading.

This article talks about generative AI, and in particular Google’s choices when it comes to how they’ve chosen to integrate it into Gmail. The author, Pete Koomen, includes some lovely little interactive elements showing the differences between how Gemini (Google’s AI model) performs things by default, and how he would like it to behave.

The System Prompt explains to the model how to accomplish a particular set of tasks, and is re-used over and over again. The User Prompt describes a specific task to be done.

[…]

The problem is not just that the Gmail team wrote a bad system prompt. The problem is that I’m not allowed to change it.

[…]

As of April 2025 most AI apps still don’t (intentionally) expose their system prompts. Why not?

Here’s the insight, and the reason why I enjoy ‘vibe coding’ so much (i.e. creating web apps using a conversational interface):

The modern software industry is built on the assumption that we need developers to act as middlemen between us and computers. They translate our desires into code and abstract it away from us behind simple, one-size-fits-all interfaces we can understand.

The division of labor is clear: developers decide how software behaves in the general case, and users provide input that determines how it behaves in the specific case.

By splitting the prompt into System and User components, we’ve created analogs that map cleanly onto these old world domains. The System Prompt governs how the LLM behaves in the general case and the User Prompt is the input that determines how the LLM behaves in the specific case.

With this framing, it’s only natural to assume that it’s the developer’s job to write the System Prompt and the user’s job to write the User Prompt. That’s how we’ve always built software.

But in Gmail’s case, this AI assistant is supposed to represent me. These are my emails and I want them written in my voice, not the one-size-fits-all voice designed by a committee of Google product managers and lawyers.

In the old world I’d have to accept the one-size-fits-all version because the only alternative was to write my own program, and writing programs is hard.

In the new world I don’t need a middleman to tell a computer what to do anymore. I just need to be able to write my own System Prompt, and writing System Prompts is easy!
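The System/User split Koomen describes maps directly onto the role-tagged messages that typical chat-completion APIs accept. This sketch (the function and prompt text are my own, purely illustrative) shows how little it would take for an app to let users supply their own System Prompt:

```python
# Sketch of the System Prompt / User Prompt split from the excerpt.
# build_request is hypothetical; real chat APIs take a similar list
# of role-tagged messages.
DEFAULT_SYSTEM_PROMPT = (
    "You are a helpful email-writing assistant. "
    "Write in a neutral, professional tone."
)


def build_request(user_prompt: str,
                  system_prompt: str = DEFAULT_SYSTEM_PROMPT) -> list[dict]:
    # The system message governs behaviour in the general case; the user
    # message carries the specific task. Koomen's complaint is that apps
    # hard-code the first argument instead of exposing it to the user.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]


# A user who can override the system prompt gets mail in their own voice:
my_voice = "Write terse, friendly emails in British English, no sign-off."
request = build_request("Reply accepting the meeting.", system_prompt=my_voice)
```

Nothing about the API forces the “one-size-fits-all voice”; it’s a product decision that the default argument can’t be changed.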

Source: Pete Koomen

Image: Alan Warburton

How times change

Auto-generated description: A cartoon shows a dog sitting in a chair reading a newspaper with a cup of coffee, while two people comment about taking house training too far.

Earlier this week, I had the pleasure of attending the Lit & Phil in Newcastle with my mother, where we spent an evening with David Haldane, the world-renowned cartoonist. He’s just started a new gig (at the age of 70!) for The Observer.

He outlined how technology had changed over the years: at the start, his cartoons would be driven to the train station by his wife and taken on the last train to London, where they would then be taken by courier to the offices of the newspaper or magazine. Then, when fax machines came in, you couldn’t be sure that what you were sending would actually go to the right place, so sometimes people would be looking all around the place for things he’d sent through.

Much more recently, he mentioned how, with a cut-off time of 21:30, he’d been asked 10 minutes beforehand to redo a cartoon. He obliged, sent it through digitally — and by the time he’d tidied up his stuff and gone downstairs, there was his cartoon on the front page of the “tomorrow’s newspaper front pages” section of the TV news!

How times change.

Participants remembered fake headlines more than real ones regardless of the political concordance of the news story

Auto-generated description: A person sits on a stool in a dimly lit room, holding a burning newspaper in front of their face.

You Are Not So Smart (YANSS) is a great podcast, and one of the recent episodes is right up my street. Based around this paper by disinformation researchers, it introduces the notion of _dis_confirmation bias.

Essentially, they did rigorous research in the US which showed that people prefer concordance with their existing belief systems over conformance with truth. I was expecting to hear philosopher W.V. Quine referenced in terms of his metaphor of us having a ‘web of belief’. Those beliefs that are toward the periphery of the web are more easily jettisoned than those nearer the centre, which are core to our identity.

Anyway, it’s a really interesting episode, especially given that most people think the problem is ‘fake news’. That’s half the problem: the other part is getting people to prefer (and share) true news rather than random stuff that happens to cohere with their existing beliefs.

Resistance to truth and susceptibility to falsehood threaten democracies around the globe. The present research assesses the magnitude, manifestations, and predictors of these phenomena, while addressing methodological concerns in past research. We conducted a preregistered study with a split-sample design (discovery sample N = 630, validation sample N = 1,100) of U.S. Census-matched online adults. Proponents and opponents of 2020 U.S. presidential candidate Donald Trump were presented with fake and real political headlines ahead of the election. The political concordance of the headlines determined participants’ belief in and intention to share news more than the truth of the headlines. This “concordance-over-truth” bias persisted across education levels, analytic reasoning ability, and partisan groups, with some evidence of a stronger effect among Trump supporters. Resistance to true news was stronger than susceptibility to fake news. The most robust predictors of the bias were participants’ belief in the relative objectivity of their political side, extreme views about Trump, and the extent of their one-sided media consumption. Interestingly, participants stronger in analytic reasoning, measured with the Cognitive Reflection Task, were more accurate in discerning real from fake headlines when accurate conclusions aligned with their ideology. Finally, participants remembered fake headlines more than real ones regardless of the political concordance of the news story. Discussion explores why the concordance-over-truth bias observed in our study is more pronounced than previous research suggests, and examines its causes, consequences, and potential remedies.

Source: YANSS 307 – Why resistance to true news that you would rather not believe can be stronger than susceptibility to fake news that you wish was true

Image: Nijwam Swargiary

These parts would end up in a landfill otherwise

Auto-generated description: A person is repairing or examining a smartphone on a workbench with various tools and components around them.

‘Degrowth’ is an idea which makes perfect sense in a resource-limited world. Yet the framing remains problematic and doesn’t seem to chime with the current Overton window.

Degrowth’s main argument is that an infinite expansion of the economy is fundamentally contradictory to the finiteness of material resources on Earth. It argues that economic growth measured by GDP should be abandoned as a policy objective. Policy should instead focus on economic and social metrics such as life expectancy, health, education, housing, and ecologically sustainable work as indicators of both ecosystems and human well-being.

I bring this up because of an article I saw in The Verge about ‘Frankenstein’ laptops being created from salvaged parts in India. Done properly, this is degrowth in action: creating value by repurposing e-waste into functional machines. It reminds me of a scene from Star Wars: Episode IV – A New Hope where Luke Skywalker’s uncle purchases R2-D2 and C-3PO from the Jawas, who resell scrap, droids, and technology they salvage from the environment and crashed ships.

One of the reasons Trump wants a deal with Ukraine and has threatened both Canada and Greenland is due to access to minerals. In a high-tariff, protectionist world, degrowth helps us resist tyrants and create economies that are not based on endless growth. Sometimes that number has to stop going up and to the right.

Across India, in metro markets from Delhi’s Nehru Place to Mumbai’s Lamington Road, technicians like Prasad are repurposing broken and outdated laptops that many see as junk. These “Frankenstein” machines — hybrids of salvaged parts from multiple brands — are sold to students, gig workers, and small businesses, offering a lifeline to those priced out of India’s growing digital economy.

[…]

Manohar Singh, the owner of the workshop-slash-store where Prasad works, flips open a refurbished laptop while sitting on a rickety stool. The screen flickers to life, displaying a crisp image. He smiles — a sign that another machine has been successfully revived.

“We literally make them out of scrap! We also take in second-hand laptops and e-waste from countries like Dubai and China, fix them up, and sell them at half the price of a new one,” he explains.

“A college student or a freelancer can get a good machine for INR 10,000 [about $110 USD] instead of spending INR 70,000 [about $800 USD] on a brand-new one. For many, that difference means being able to work or study at all.”

[…]

[M]any repair technicians have no choice but to rely on informal supply chains, with markets like Delhi’s Seelampur — India’s largest e-waste hub — becoming a critical way to source spare parts. Seelampur processes approximately 30,000 tonnes (33,069 tons) of e-waste daily, providing employment to nearly 50,000 informal workers who extract valuable materials from it. The market is a chaotic maze of discarded electronics, where workers sift through mountains of broken circuit boards, tangled wires, and cracked screens, searching for usable parts.

Farooq Ahmed, an 18-year-old scrap dealer, has spent the last four years sourcing laptop components for technicians like Prasad. “We find working RAM sticks, motherboards with minor faults, batteries that still hold charge and sell it to different electronic workshops,” he says. “These parts would end up in a landfill otherwise.”

[…]

Despite the dangers, the demand for Frankenstein systems continues to grow. And as India’s digital economy expands, the need for such affordable technology will only increase. Many believe that integrating the repair sector into the formal economy could bring about a win-win situation, reducing e-waste, creating jobs, and making technology more accessible.

Sources: Wikipedia & The Verge

Image: Kilian Seiler

You don't fit in. And that is amazing.

Auto-generated description: A group of LEGO stormtroopers surrounds a LEGO Pierrot clown figure.

A few months ago, when we had basically no work on, I grumpily applied for some jobs. I had a couple of interviews, one of which turned into some consultancy work. But I didn’t get any of them, which on the one hand isn’t very validating, but on the other is secretly very relieving.

Aristotle said that you can’t make a decision as to whether someone is ‘happy’ until after they have died. You need to see the full arc. The same is true of employment: how it all ends is an important factor as to whether you were ‘successful’ or ‘enjoyed’ it. I’ve only had two jobs that ended well. This is because of the mantra I have tried to instil in our two teenagers: people can only treat you the way you let them. I’m not sucking up to anyone, and I’m not changing the way that I think, work, and organise my time to fit a corporate ‘system’.

Which brings us to Mike Monteiro’s post. You should read the whole thing, as he weaves in recollections about being left-handed, neurodivergence, and his own career. The parts I’ve picked out here reflect some of my own experience over the past 22 years of (what some may call) a career. Courtesy of Dan Sinker, I have a Marginally Employed patch below my monitor to remind me I chose this because this is the way. For me, anyway.

Very early on in my “career” (LOL) I decided I wasn’t drift compatible with working in large organizations. I just didn’t enjoy it. Which isn’t to say that I can’t work with people, there are people I absolutely love working with. […] I didn’t like working in large organizations because the larger an organization is, the more likely it is to have a certain way of doing things. Which kinda makes sense, because if you have thousands of people doing things that are supposed to be interconnected, you kinda want a process that everyone can follow, or there’s complete chaos. (The fact that most organizations attempt to do this and it still results in complete chaos is also interesting, but we’re not tackling that today.) The larger an organization becomes, the more it needs everyone to work and think the same way. That way, if it loses a worker, it’s that much easier to plug in a new worker. And while that way of working might make sense for the organization, it’s important that we also ask ourselves, as workers, if it’s working for us.

[…]

Since large organizations made me miserable, I decided to spend my career in small little studios, which tend to be a bit more supportive and even gravitate more towards people who don’t spin in the same direction. Possibly because they were all started by people who don’t spin in the same direction. Or at the same speed. Or spin at all.

[…]

I try to be really careful about how I dole out advice to people. There is no system. There is no one way. There’s no guarantee that our brains will take us on the same journey. I’ll tell people about my own experience in doing something. I’ll tell you that we need to get from Point A to Point B. I’ll tell you how I’ve gotten there in the past, which you can use as a frame of reference, if that helps you, but then I want your brain to do its thing. Because your brain is mapping out a totally different landscape than mine is, and that is fascinating.

[…]

The world is full of… people who want to sell you Design Thinking™… and people who want to see everything spin the same way. They want order. They want sameness. But the only sameness they want is for you to be as miserable as they are. And they’re all miserable. They hate you because you’re a threat. You see what they don’t. You feel what they can’t. You can smell colors! You can read the stars. You see the connections that they can’t. You can paint something, with your own hands, that they have to fire up Three Mile Island to even attempt. You can change your body into what you need it to be. You can love who you love.

You don’t fit in.

And that is amazing.

Source: [Mike Monteiro’s Good News](buttondown.com/monteiro/…)

Image: Mulyadi

Obvious things are obvious if you think about them

Looking through a window at an Albert Einstein figurine

I’m sharing two articles together here because they help reframe a couple of things which are important to me. One is about political opinions and demographics, the other one is about meat-eating.

Let’s start with political opinions. The ‘received wisdom’ that older people are more conservative is based on a survivorship bias:

One of the abiding realities of our political era is a major generational split anchored on the right by disproportionately conservative seniors and on the left by disproportionately progressive millennials and post-millennials. This is often thought of as a perfectly natural, even inevitable, phenomenon: Young people are adventurous, open to new ways of thinking, and not terribly invested in the status quo, while old folks have time-tested views, assets they want to protect, and a growing fear of the unknown and unfamiliar.

[…]

But it is important to note that some generational disjunctions in political behavior are driven by demography. It’s well understood that millennials are significantly more diverse than prior generations. But there is something else driving the relative homogeneity of seniors: Poorer people are often hobbled by chronic illness, and succumb to premature death.

The other issue is around the common belief that prehistoric humans ate mainly meat. Of course, animal bones last a lot longer than plant grains, so just as we don’t have much physical evidence of wooden structures (as opposed to stone ones), we have a lot more bones than grains on which to base theories.

A new archaeological study along the Jordan River, just south of northern Israel’s Hula Valley, sheds new light on the diets of early humans and challenges long-standing assumptions about prehistoric eating habits. The research shows that ancient hunter-gatherers relied heavily on plant foods, especially starchy varieties, as a key energy source. Contrary to the popular belief that early hominids primarily consumed animal protein, the findings reveal a varied plant-based diet that included acorns, cereals, legumes, and aquatic plants.

[…]

The research contradicts the prevailing narrative that ancient human diets were primarily based on animal protein, as suggested by the popular “paleo” diet. Many of these diets are based on the interpretation of animal bones found in archaeological sites, with plant-based foods rarely preserved.

Sources: New Yorker Magazine & SciTechDaily

Image: William Felipe Seccon

I just think that people who write about technology should have a disclaimer about the tech stack they use

Sign saying 'If you don't have control over the technology that runs your life, the devices and services that run your life, then your life will be run by other people using the computers'.

I sent Carole Cadwalladr’s latest TED Talk to people this week who may not otherwise understand what’s going on in the US. Big Tech companies like Google, Meta, and Apple are all in the US, and also… let’s not pretend a similar thing couldn’t happen in other countries. We should be ready.

In this post, Elena Rossini points out how “impossibly incongruous” it is that Cadwalladr uses Bluesky and Substack for her online presence, “two centralized services, owned or funded by questionable groups.”

The products and services we use matter. Not only to protect ourselves, our friends, and our families, but also in terms of resisting a dominant narrative and worldview. I saw that Matt Jukes recently added a colophon to his blog to explain not only his process of writing but the products he uses. I like that.

Carole Cadwalladr has my utmost admiration. The fiery presentation she gave at TED is not diminished by the tech stack she personally uses. I firmly believe everyone should watch her video - it’s digital literacy 101.

Still, I believe that if even Carole Cadwalladr - who recognizes the problem (the broligarchy) and speaks so eloquently against it - is ONLY using American VC-funded Big Tech platforms, her presence there is an implicit endorsement. And her audience will get the indirect message that compromises need to be made and it’s no big deal to use Broligarchs’ platforms because they may be the only solution to get one’s message out there.

[…]

When I learned about the doubling down by Substack founders - who refused to moderate or demonetize newsletters promoting hate speech - I moved away from the platform… and I unsubscribed from 40+ newsletters hosted there (including two paid newsletters). I admire Cadwalladr’s work and I would love to do a paid subscription to her blog - but I won’t as long as she’s on Substack. I am sure there are many people who feel the same way.

[…]

If I were her, I would set up a blog/newsletter on Ghost - with paid membership - and I would keep a Substack account, taking advantage of the Notes feature to share articles hosted on her hypothetical Ghost blog. The best of both worlds.

For social media, I would create an account on the Fediverse and use a tool like Buffer or Fedica to crosspost to multiple accounts.

[…]

I just think that people who write about technology should have a disclaimer about the tech stack they use - in order to see if they’re “walking the talk.” And if people who speak truth to power feel they need to be on VC-backed, centralized, for-profit social networks, sure no problem. But I believe that anyone speaking up against the broligarchy should be active on the Fediverse too - a galaxy of independent, free, open source networks that is not funded by billionaires or crypto bros.

Source: Elena Rossini

Image: Marija Zaric

This extension is the solution to becoming more European oriented

Screenshot of the Go European extension in action on the Nike website

There’s a growing movement in the communities of which I’m part to move off US infrastructure and away from US-owned companies. For obvious reasons. This browser extension is a good example of how that is being facilitated, by suggesting European alternatives.

Suggests European website alternatives to non-European websites.

This extension is the solution to becoming more European oriented. The extension provides European alternatives for the most used websites and services around the world wide web.

Key features:

  • Site Detection and Notifications
  • Automatically recognizes websites that have European alternatives
  • Badge counter shows the number of alternative sites
  • Receive unobtrusive notifications about available alternatives
  • Clean, modern UI with information about each alternative
  • One-Click access to visit suggested sites

Source: Go European

Sprint goals suck too

Man working on laptop in front of a heartbeat-style graph

Back in about 2014, I remember Matt Thompson helping to bring ‘heartbeats’ to the Mozilla Foundation. As he explained in this post for opensource.com a couple of years later, using that word instead of ‘sprint’ is useful because:

Heartbeats can create a great sense of purpose, and ebb and flow in your team. They can be set to any length—a week, two weeks, a month. It’s really just about bringing people together in a regular, predictable cycle, with a ritual and set of dance steps to ensure everyone’s on the same page, headed in the right direction, and learning and accomplishing important things together.

I was reminded of Matt’s work when I saw Steve Messer’s post about helping a GOV.‌UK team implement a new model for agile delivery. Similarly, he points out that you don’t need to do two-week sprints.

This is something that Laura and I have been discussing on a project we started last month with a new client. There’s an expectation these days that to work in an ‘agile’ way you have to do sprints. You can use them. But you don’t have to.

Traditional two-week sprints and Scrum provide good training wheels for teams who are new to agile, but those don’t work for well established or high performing teams.

For research and development work (like discovery and alpha), you need a little bit longer to get your head into a domain and have time to play around making scrappy prototypes.

For build work, a two-week sprint isn’t really two weeks. With all the ceremonies required for co-ordination and sharing information – which is a lot more labour-intensive in remote-first settings – you lose a couple of days with two-week sprints.

Sprint goals suck too. It’s far too easy to push it along and limp from fortnight to fortnight, never really considering whether you should stop the workstream. It’s better to think about your appetite for doing something, and then to focus on getting valuable iterations out there rather than committing to a whole thing.

Source: Boring Magic

Image: Matt Collamer

You don’t have to agree with this idea to see that it represents a very different way of thinking about equality

Red cross through familiar equality vs equity meme showing kid standing on boxes

I’ve always been a bit uneasy about the above meme (to which I’ve added a red cross). Thankfully, due to a link to a blog post by Rob Farrow, I’ve discovered why. In fact, it’s possibly the reason why the whole DEI thing has been so contentious.

It shouldn’t need saying, but people don’t read carefully and aren’t used to reading beyond headlines these days. So, before continuing: of course I believe in equality. The issue is with the woolly concept of ‘equity’. The article I’m citing is by Joseph Heath, a Professor of Philosophy at the University of Toronto. He writes as you’d expect such a person to write: clearly, but assuming a bit of a background in Philosophy. Thankfully, yours truly does have that background and is here to help 😉

The purpose of any good model is to present a simplified representation of reality, in order to accentuate crucial features and make them more analytically tractable. The question, therefore, is whether the kids on boxes provide a useful model for thinking about the sorts of distribution problems that arise in DEI contexts. Most egalitarian philosophers, I think, would say that it is a bad model.

I’ve taken the quotations out of order because the overall argument makes more sense when presented this way. So we start from the position that the kids on boxes meme isn’t particularly useful.

The contrast that is drawn in the meme, which was originally intended to illustrate the distinction between “equality of opportunity” and “equality of outcome,” captures the way that people used to think about issues of equality up until the late 1960s, before the publication of John Rawls’s A Theory of Justice in 1971. After that, pretty much everyone came to agree that the opportunity/outcome distinction was neither useful nor coherent. The really important question was not when one chose to equalize, but rather what one intended to equalize.

So we need to figure out what we’re ‘equalising’ here. Is it the number of boxes? Or the quality of view?

The most immediate problem with the meme is that it does not present an accepted definition of the term “equity,” but rather a stipulative redefinition, which does not correspond very well to how the term has historically been used… [T]he graphic was originally drawn to illustrate the contrast between equality of opportunity and equality of outcome. Later on, after it was reproduced umpteen times, someone changed the labels, and somehow the idea that “equality of outcome” should be called “equity” stuck.

To recap: we’ve got an outdated notion of ‘equality of opportunity’ vs ‘equality of outcome’, which has been made even more problematic by the meme relabelling the latter as ‘equity’. It’s not a defensible philosophical position, partly because ‘equity’ doesn’t have a universally accepted definition and is usually seen as a looser standard than strict equality.

My suspicion is that when DEI ideas were first taking shape, people gravitated toward “equity” language precisely because it had this looseness about it. Because people are different (i.e. diverse), one should not expect perfect equality, but rather just equity. And for all I know, this may have been what the person who modified the kids on boxes meme was thinking, suggesting that the allocation of boxes to kids should be responsive to the different characteristics of the kids. The unfortunate result, however, is that instead of introducing a looser standard of equality, the meme wound up saddling DEI with a commitment to an extremely strict, controversial conception of equality (i.e. equality of outcome), which no reasonable person actually endorses as a general principle. Furthermore, this was not achieved through argument, but merely through persuasive definition.

And this, dear reader, is why Philosophy is such an important subject. If you don’t get these kinds of things right, then it has downstream implications. ‘Equity’ might seem like a reasonable thing to aim for, but if you don’t know what it means, then you’re going to run into trouble.

Setting aside these terminological issues and focusing on equality of outcome, the next big problem with the meme is that it commits DEI proponents to a conception of equality that is somewhere to the left of the most left-wing view defended by left-wing philosophers. Indeed, one of the major objectives of theorists in the “equality of what?” debate was to reformulate egalitarianism in such a way as to avoid the obvious objections to the simple-minded conception of equality of outcome that used to prevail in public debates (and that is represented nicely in the meme).

The ‘obvious objections’ mentioned above concern things like personal responsibility. Intuitively, we don’t think that people who have made poor choices in life should be treated the same as those who have wound up with less because of circumstances beyond their control.

[Philosophers] took the choice/circumstance distinction and turned it into the fundamental justification for egalitarianism, arguing that our most basic reason for caring about equality is our desire to neutralize the effects of bad luck. According to this view, when we look at the kids on boxes meme and agree to take the box away from the tall guy and give it to the short kid, the reason we make this judgment is because height is an unchosen characteristic – it’s not the short kid’s fault that he’s short. The idea is not that everyone should get exactly the same outcome, but that we should not be allowing unchosen differences between persons to determine outcomes.

Framed like that, DEI would apply across the board, to people who face inequality through no fault of their own. It’s a shame that we took a meme-based approach to policy rather than a philosophical one. But then, we live in 2025 where only a small proportion of people are willing to take a nuanced view.

You don’t have to agree with this idea to see that it represents a very different way of thinking about equality. And from this perspective, the problem with the meme is that it dredges up an old, discredited view of equality, that can easily be undermined just by pointing to cases where individuals wind up with less because of choices they have made. A lot of the excitement generated by luck egalitarianism was based on the perception that we had overcome a significant error in thinking about equality, and could now move on to discussion of more defensible conceptions. And yet all it took was a single meme to turn back the clock by 50 years!

Source: In Due Course

Image: Modified from an original used in the above blog post.

I’m 100% positive people are going to talk to their cars

A chart ranks the top 10 generative AI use cases for 2024 and 2025, highlighting themes like personal development and technical assistance, with generating ideas and therapy/companionship at the top.

We live in the midst of a loneliness epidemic, especially for men. A recent Harvard Business Review article showed the difference between what people said they were using AI for in 2024 compared to 2025.

“Generating ideas” has gone from first to sixth place, and “Therapy/companionship” has moved from second to first place. “Finding purpose” is a new use case coming straight in at third. There’s a paywall on the HBR article, so you can find the report here. Note that this was, in the words of the author, Marc Zao-Sanders, “a rigorous, expert-driven curation of public discourse, sourced primarily from Reddit forums.” No methodology is provided.

That being said, I’m using the report by way of introduction to the following extract from an article by Jay Springett, who reckons that soon everybody will be talking to their car. I mean, I already talk to my Polestar 2 as it has Google Assistant built in, but he means talking in a deep and meaningful way.

For me, this is a case of not if, but when. It’s going to challenge notions of privacy, but also intimacy, infidelity, and loss (when providers inevitably shut down a service).

Consider the average American commuter: 60 minutes a day, mostly alone, in the car. The vehicle as liminal space. Neither home nor work. Private and intimate. I’m 100% positive people are going to talk to their cars. First for fun. Then for directions. Then about their lives. Their feelings. Their grief, their divorce.

And now that OpenAI has also introduced Memory (at least in the US) the car might remember everything you’ve ever told it. 😬

There’s a meaning crisis going on, which means there is a gaping emotional void waiting to be filled by a good listener that’s found in the safety of a car. Some people, especially men, already love their cars. What happens when the car appears to care for them back?

Her becomes a lot more plausible when the AI you fall in love with is also a car.

Source: thejaymo

Image: (shared by various people on LinkedIn)

End times fascism is a darkly festive fatalism

It’s long, so I’ve provided a proportionally-long excerpt, but it really is worth taking the time to read this article by Naomi Klein and Astra Taylor about “end times fascism.” It’s a worldview that is simultaneously conspiratorial, eschatological, and profoundly anti-democratic.

I’m glad I don’t live in the USA at the moment, but I think we’re kidding ourselves if we believe that this kind of worldview isn’t aiming to capture a country like the UK soon. Carving up the world into multiple oligarchies suits rich people just fine. And the world can burn so long as they have air conditioning and a rocket ride outta here.

Inspired by a warped reading of the political philosopher Albert Hirschman, figures including Goff, Thiel and the investor and writer Balaji Srinivasan have been championing what they call “exit” – the principle that those with means have the right to walk away from the obligations of citizenship, especially taxes and burdensome regulation. Retooling and rebranding the old ambitions and privileges of empires, they dream of splintering governments and carving up the world into hyper-capitalist, democracy-free havens under the sole control of the supremely wealthy, protected by private mercenaries, serviced by AI robots and financed by cryptocurrencies.

[…]

The startup country contingent is clearly foreseeing a future marked by shocks, scarcity and collapse. Their high-tech private domains are essentially fortressed escape pods, designed for the select few to take advantage of every possible luxury and opportunity for human optimization, giving them and their children an edge in an increasingly barbarous future. To put it bluntly, the most powerful people in the world are preparing for the end of the world, an end they themselves are frenetically accelerating.

[…]

Our opponents know full well that we are entering an age of emergency, but have responded by embracing lethal yet self-serving delusions. Having bought into various apartheid fantasies of bunkered safety, they are choosing to let the Earth burn

[…]

Listen to Steve Bannon’s daily podcast – which bills itself as Maga’s premier media outlet – and you will be barraged with a singular message: the world is going to hell, the infidels are breaching the barricades, and a final battle is coming. Be prepared. The prepper message becomes particularly pronounced when Bannon switches to hawking his advertisers’ products. Buy Birch Gold, Bannon tells his audience, because the over-leveraged US economy is going to crash and you can’t trust the banks. Stock up on ready-to-eat meals from My Patriot Supply. Sharpen your target practice using a laser-guided at-home system. The last thing you would want to do is depend on the government during a disaster, he reminds listeners (left unsaid: especially now that the Doge boys are selling off the government for parts).

Bannon doesn’t only urge his audience to make their own bunkers, of course. He also advances a vision of the United States as a bunker in its own right, one in which Ice agents stalk the streets, workplaces and campuses, disappearing those deemed enemies of US policy and interests. The bunkered nation lies at the heart of the Maga agenda, and of end times fascism. Inside its logic, the first job is to harden national borders and expunge all enemies, foreign and domestic.

[…]

As fascism always does, today’s Armageddon complex crosses class lines, bonding billionaires to the Maga base. Thanks to decades of deepening economic stresses, alongside ceaseless and skillful messaging pitting workers against one another, a great many people understandably feel unable to protect themselves from the disintegration that surrounds them (no matter how many months of ready-to-eat meals they buy). But there are emotional compensations on offer: you can cheer the end of affirmative action and DEI, glorify mass deportation, enjoy the denial of gender-affirming care to trans people, villainize educators and health workers who think they know better than you, and applaud the demise of economic and environmental regulations as a way to own the libs. End times fascism is a darkly festive fatalism – a final refuge for those who find it easier to celebrate destruction than imagine living without supremacy.

[…]

Three recent material developments have accelerated end times fascism’s apocalyptic appeal. The first is the climate crisis. While some high-profile figures might still publicly deny or minimize the threat, global elites, whose ocean-front properties and datacenters are intensely vulnerable to rising temperatures and sea levels, are well-versed in the ramifying perils of an ever-heating world. The second is Covid-19: epidemiological models had long predicted the possibility of a pandemic devastating our globally networked world; the actual arrival of one was taken by many powerful people as a sign that we have officially arrived at what US military analysts forecasted as “the Age of Consequences”. No more predictions, it’s going down. The third factor is the rapid advancement and adoption of AI, a set of technologies that have long been associated with sci-fi terrors about machines turning on their makers with ruthless efficiency – fears expressed most forcefully by the same people who are developing these technologies. All of these existential crises are layered on top of escalating tensions between nuclear-armed powers.

So, er, what do we do about all this?

First, we help each other face the depth of the depravity that has gripped the hard right in all of our countries. To move forward with focus, we must first understand this simple fact: we are up against an ideology that has given up not only on the premise and promise of liberal democracy but on the livability of our shared world – on its beauty, on its people, on our children, on other species. The forces we are up against have made peace with mass death. They are treasonous to this world and its human and non-human inhabitants.

Second, we counter their apocalyptic narratives with a far better story about how to survive the hard times ahead without leaving anyone behind. A story capable of draining end times fascism of its gothic power and galvanizing a movement ready to put it all on the line for our collective survival. A story not of end times, but of better times; not of separation and supremacy, but of interdependence and belonging; not of escaping, but staying put and staying faithful to the troubled earthly reality in which we are enmeshed and bound.

We have reached a choice point, not about whether we are facing apocalypse but what form it will take. […]

To have a hope of combating the end times fascists […] we will need to build an unruly open-hearted movement of the Earth-loving faithful: faithful to this planet, its people, its creatures and to the possibility of a livable future for us all. Faithful to here.

Source: The Guardian

Image: Arctic Qu

Nobody should have to pay to be safe while using a computer

Who uses Tails - activists, journalists and their sources, domestic violence survivors, you

Yesterday, as happens on a regular basis, there was an update to Tails, “the amnesiac incognito live system.” I mention this because much of our digital life is online, and many of the systems we use are not only hostile to users, but are backed by organisations with links to authoritarian regimes and surveillance capitalism.

If I were travelling to China, the US, or Russia, or indeed were a citizen of a country with authoritarian tendencies, this is what I’d be using to cover my back. As Cardinal Richelieu famously said, “Give me six lines written by the most honest man in the world, and I will find enough in them to hang him.” These days, our digital footprint gives people with a grudge, an axe to grind, or a particular agenda much more than “six lines”. Protect yourself proactively.

To use Tails, shut down the computer and start on your Tails USB stick instead of starting on Windows, macOS, or Linux. You can temporarily turn your own computer into a secure machine. You can also stay safe while using the computer of somebody else.

Tails always starts from the same clean state and everything you do disappears automatically when you shut down Tails.

Tails includes a selection of applications to work on sensitive documents and communicate securely. All the applications are ready-to-use and are configured with safe defaults to prevent mistakes.

Everything you do on the Internet from Tails goes through the Tor network. Tor encrypts and anonymizes your connection by passing it through 3 relays. Relays are servers operated by different people and organizations around the world.

Tor prevents someone watching your Internet connection from learning what you are doing on the Internet. You can avoid censorship because it is impossible for a censor to know which websites you are visiting.

Tor also prevents the websites that you are visiting from learning where and who you are, unless you tell them. You can visit websites anonymously or change your identity. Online trackers and advertisers won’t be able to follow you around from one website to another anymore.

All the code of our software is public to allow independent security researchers to verify that Tails really works the way it should.

Nobody should have to pay to be safe while using a computer. That is why we are giving out Tails for free and try to make it easy to use by anybody. Tails is made by the Tor Project, a global nonprofit developing tools for online privacy and anonymity.

Source & image: Tails

AI Literacy without power analysis is just compliance training

Two people are illustrated in a warm, cartoon style, one on the left and one on the right. The person on the left, who has their back to the viewer, and is typing on a laptop which is sitting on a table. They are white, their hair is shoulder length and dark, and they are wearing a green t-shirt. The computer screen is dark with rows of coloured squares representing programming. The person on the right looks similar but their hair is now tied back in a pony tail, and they are wearing a white lab coat and safety goggles. They are reaching down to lift up an orange hazard label which is about the size of a book. The label is an orange square with a black exclamation mark in the middle. The person looks like they are being careful as they lift it.

I’m working on an AI Literacy project with the BBC at the moment. I haven’t given many details of this anywhere, as they need to socialise internally that the work is happening first. But I’m really enjoying getting my teeth back into the new literacies space.

For years now, I’ve included in my presentations the fact that when you define ‘literacy’ you’re making a power move. You’re either explicitly or implicitly saying what counts as “literate behaviour.”

That’s why I’m in agreement with James O’Hagan’s position in this article. It chimes with the point I made earlier this week about it being a good thing that young people are using AI for their own ends. They need a space to push back at simple ‘compliance training’ on how to use tools, and to develop critical AI Literacies (plural!)

There is a reason why the dominant models of AI literacy being promoted to schools feel so hollow. They focus on functionality, not freedom. They train students to use the tools, not challenge the systems. They offer guardrails, not agency.

In one of my Medium pieces, I argued that we are designing AI literacy to make education compliant, not smarter. That is still the case. What often gets labeled “AI education” is really just exposure — watching a tool work, seeing a demo, reading a definition. For example, I completed all five MagicSchool AI certification courses in under 15 minutes — without ever logging in or using the platform. That says more about the training than it does about the tool.

Very little of this equips students to intervene. To resist. To build differently.

We need AI literacy that makes students dangerous thinkers, not docile users.

[…]

Let students ask who funds the tools. Who sets the limits. Who benefits. Let them critique the platforms that shape their school day. Let them design alternatives rooted in their experiences. And let us stop pretending that integration is progress if the terms are dictated from the outside.

We can — and must — teach the technical. But we should not stop there. We need to lift the hood, yes. But we also need to ask why the engine was built in the first place, who it leaves behind, and where it refuses to go.

The way we talk about AI in education will shape the way we teach it. If we treat it like magic, we will mystify. If we treat it like software, we will standardize. But if we treat it like a political, social, and ethical terrain, we will start to give students the tools to navigate it — and challenge it.

Source: James O’Hagan

Image: Yasmin Dwiputri & Data Hazards Project

The world is a built environment

Disassembled fan, with parts neatly organised

Dan Sinker uses this blog post to discuss US politics and the systematic dismantling of important infrastructure. But I’m interested in the wider framing of understanding that you can take things apart, literally and figuratively. As Steve Jobs said, everything around you that you call life was made up by people that were no smarter than you.

Most everything I know, I know because I took something apart.

I mean that literally: I’ve cut and I’ve unscrewed and I’ve pried and I’ve desoldered to get inside electronics and appliances. And I mean it figuratively: I read the source code for web pages to build my own, I’ve deconstructed writing to make myself better at it, I’ve mapped entire audio stories with pen and paper to understand how to assemble them myself.

If I want to really understand something, I have to understand all the pieces that went into making it come together.

[…]

Everything can be taken apart, and every step of the process is an opportunity to learn.

The world is a built environment, and I think understanding how it was built is key to being able to truly live in it.

Source: Dan Sinker

Image: KAT

It will be increasingly difficult to preserve the illusion that any government could solve the problems of capitalism

Red and black image with the words NO GOVERNMENT CAN GIVE YOU FREEDOM in white

Adam Procter sent me this video which explains the ‘squeeze out’ that’s been happening over the last 30 years or so. It happens in five stages:

  1. The rich start to accumulate more money, as they are not taxed enough. They buy up assets, out-competing the working classes for resources, and driving them into debt.
  2. The working classes have run out of money and cannot borrow or spend any more, so there is an economic depression and a crisis. So the government has to step in.
  3. The government starts to run out of resources as well, so borrows from (or enters into public-private partnerships with) the rich.
  4. The government has no choice but to slowly eviscerate the middle classes. Eventually there is no wealth left other than that held by the rich, meaning that the physical structure of society changes so that it only supports consumption by them.
  5. There is no-one left to squeeze. The rich own everything, and the only way they can try and grow their wealth is by sending people to fight in wars against each other.

Like Cory Doctorow’s three stages of enshittification, it’s a useful overview of a process that would otherwise be difficult to pin down. Is it 100% accurate everywhere and all of the time? No, probably not, but it’s a useful framing.

Given the absolute destruction of the world and dismantling of civil society that’s happening at the moment, I’m a little bit less reticent to state that I’m an anarchist. Not the ridiculous caricature of anarchists as terrorists found in books like The Man Who Was Thursday by G.K. Chesterton (which is otherwise an enjoyable novel). Nor am I out there actively fomenting trouble. But that broadly libertarian socialist angle is my starting position for understanding how the world should be.

If there was ever a time to be reading more revolutionary stuff, it’s now. So I’m pointing you towards Crimethinc., “a rebel alliance—a decentralized network pledged to anonymous collective action—a breakout from the prisons of our age.”

The future may hold neoliberal immiseration, nationalist enclaves, totalitarian command economies, or the anarchist abolition of property itself—it will probably include all of those—but it will be increasingly difficult to preserve the illusion that any government could solve the problems of capitalism for any but a privileged few. Fascists and other nationalists are eager to capitalize on this disillusionment to promote their own brands of exclusionary socialism; we should not smooth the way for them by legitimizing the idea that the state could serve working people if only it were properly administered.

[…]

Rather than seeking state power, we can open up spaces of autonomy, stripping legitimacy from the state and developing the capacity to meet our needs directly. Instead of dictatorships and armies, we can build worldwide rhizomatic networks to defend each other against anyone who wants to wield power over us. Rather than looking to new representatives to solve our problems, we can create grassroots associations based in voluntary cooperation and mutual aid. In place of state-managed economies, we can establish new commons on a horizontal basis. This is the anarchist alternative, which could have succeeded in Spain in the 1930s had it not been stomped out by Franco on one side and Stalin on the other.

[…]

As the crises of our era intensify, new revolutionary struggles are bound to break out. Anarchism is the only proposition for revolutionary change that has not sullied itself in a sea of blood. It’s up to us to update it for the new millennium, lest we all be condemned to repeat the past.

Source: Crimethinc.

That's a rather laughable fine, frankly

Abstract blue and white image

A couple of months ago, a report that Laura and I wrote for Friends of the Earth was published. Focusing on AI and environmental justice, in collaboration with experts and campaigners, we came up with seven principles:

  1. Curiosity around AI creates opportunities for better choices.
  2. Transparency around usage, data and algorithms builds trust.
  3. Holding tech companies and governments accountable leads to responsible action.
  4. Including diverse voices strengthens decision making around AI.
  5. Sustainability in AI systems helps reduce environmental impact and protect natural ecosystems.
  6. Community collaboration in AI is key to planetary resilience.
  7. Advocating with an intersectional approach supports humane AI.

That third point is really important, and as this article shows, merely fining tech companies isn’t enough.

Apparently, Elon Musk’s company xAI is using methane-burning gas turbines to power a data centre at a site in Tennessee. As the generators are classed as ‘portable’, they can be used for up to 364 days. At the time of writing, xAI only has permits for the use of 15 generators, but they’re currently using 35. That’s terrible for the environment.

The proximity of powerful tech companies to right-wing authoritarianism is one of the reasons we ended up with the Holocaust. In a recent newsletter, Audrey Watters pointed to this article where the authors introduce the TESCREAL bundle (“transhumanism, Extropianism, singularitarianism, (modern) cosmism, Rationalism, Effective Altruism, and longtermism”). They consider these beliefs to be “direct descendants of first-wave eugenics.” Plenty of concepts to look up there, but the TL;DR is that “much of the billionaire funding for projects focused on AGI comes from wealthy individuals aligned or explicitly affiliated with one or more of these ideologies.”

Last year it turned out that Elon Musk’s xAI had to install additional ‘portable’ generators near its facility adjacent to Memphis, Tennessee, to power the Colossus supercomputer with over 100,000 Nvidia H100 GPUs as the local power grid could not support the load. The Southern Environmental Law Center contends the generators are “illegal,” yet they can keep running, reports The Guardian.

[…]

The law center stated in a letter that these generators are a major pollution source and breach federal air quality rules, including emissions of hazardous and cancer-causing substances. They demanded that the local health agency issue an emergency halt to the operations and fine the company $25,000 for every day it continues to run them without proper authorization.

That’s a rather laughable fine, frankly. The 100,000 H100 GPUs in the xAI Colossus would cost about $2.5 billion on their own — never mind the rest of the data center infrastructure and hardware. $25,000 per day would amount to just $9.1 million per year. Providing 150MW of electricity, 24/7, on the other hand, even at a price of $0.05 per kWh (we’re not sure what xAI pays to run the portable generators) would be about $180,000 per day.
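For what it’s worth, the arithmetic in that quote checks out. Here’s a quick sketch; the ~$25,000 per-GPU price and the $0.05/kWh rate are the article’s rough assumptions, not confirmed figures:

```python
# Back-of-the-envelope check of the figures quoted above.
# Per-GPU price and electricity rate are the article's rough assumptions.
fine_per_day = 25_000      # proposed fine, USD/day
gpu_count = 100_000        # H100s in the Colossus cluster
gpu_unit_cost = 25_000     # assumed rough price per H100, USD
power_mw = 150             # electricity supplied, megawatts
price_per_kwh = 0.05       # assumed rate, USD/kWh

fine_per_year = fine_per_day * 365                            # $9,125,000 ≈ $9.1m
gpu_outlay = gpu_count * gpu_unit_cost                        # $2.5bn
energy_cost_per_day = power_mw * 1_000 * 24 * price_per_kwh   # ≈ $180,000

print(f"Fine per year:   ${fine_per_year:,}")
print(f"GPU outlay:      ${gpu_outlay:,}")
print(f"Energy cost/day: ${energy_cost_per_day:,.0f}")
```

In other words, the proposed daily fine is roughly a seventh of what the electricity alone plausibly costs each day: a rounding error, not a deterrent.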

Source: Tom’s Hardware

Image: Logan Voss

It’s incredibly hard to politely reply whilst still walking briskly

Someone running outside from a distance

I enjoyed reading this blog post from Simon Wolf, who, in his fifties, has decided to change his lifestyle and become fitter. He seems to have been prompted by thoughts about his own mortality, and discovering the YouTuber Ryan Condon who halved his weight from 190kg to 95kg by walking, running, and cycling.

Simon talks about three things: the gear required (get decent running shoes!), the technology he uses to track his exercise, and — perhaps most importantly — the mental side of things. That’s not just “getting out there” and having the motivation to do something, but getting over ourselves thinking that people are paying more attention to us than they actually are.

Having been active most of my life, it’s only very recently that I’ve struggled with this. My recent heart condition has meant that I’ve only this week run outside again for the first time in about three months. Of course, I’m running a lot slower than I used to, and virtually walking up hills, which made me a bit self-conscious.

But of course nobody cares. I’m just another middle-aged guy moving past people’s fleeting consciousness. Even if they do recognise me, at least I’m out there, trying.

As someone who is new to exercise and who is very self-aware about their lack of fitness and ability, just stepping out of my front door to go and do the first session was intimidating. What if people laugh at me? What if I can’t do any of it and give up after a few minutes? What if I can’t run at all? What happens if I see someone I know? The doubts go on and on. But I had picked a day, the 31st of March, and I was determined to stick to it. Worst case, I could pretend that I was out for a walk in some slightly unusual clothing for me.

So at around 6pm I walked up to the recreation field in the village where I live. The warm-up is a walk so this was fine. And the field was largely empty apart from a few young children playing and a joyous sight… someone doing some intermittent walking and running which was very possibly someone else doing Couch to 5K. Suddenly it all felt a bit more possible. But even without that I was remembering the words that “doing something is better than doing nothing” and I was indeed doing something and I felt empowered by it.

In the four sessions I have done since then I have had two where the field was essentially empty, one where some ladies walking their dogs asked me what I was doing (it’s incredibly hard to politely reply whilst still walking briskly), and one where the field was part-filled with a kids football club and, inevitably, their parents all watching. But I walked and ran regardless and nobody shouted anything at me and after a little while I zoned them all out and just got on with what I was doing. I survived and it didn’t put me off doing it all again. In fact, this week I was a bit sad that I’d done my run a little earlier and missed my, presumably appreciative, audience.

Source: Simon Wolf’s Blog

Image: Marcel Ardivan

800 m² of communal space are hidden behind the facades of reclaimed wood

Communal area at the bottom of De Warren's Macchu Picchu stairway

This week I attended a STEPS Collective monthly meetup where the topic was Money. It was a really well-run session by Esther Hayes Grossman and included the first example I’ve seen of people silently voting on a topic by turning on their video in response to a prompt.

During the discussion, one participant shared details of ‘De Warren’ in Amsterdam as an example of something that “makes no sense” to real estate people because there’s no profit motive involved. It made me think how interesting it would be to live in such a space — check out that ‘Macchu Picchu’ stairway!

Having lived on a small row of terraced houses for nine years prior to moving to where we currently live, what I miss are the serendipitous moments of bumping into neighbours and having a chat in the shared back lane. I think we all need more of that in our lives, to build solidarity. It’s all well and good thinking about online social spaces, but we are embodied, social creatures.

At De Warren, a newbuild by a housing cooperative on the outskirts of Amsterdam, 36 affordable rental apartments and about 800 m² of communal space are hidden behind facades of reclaimed wood.

[…]

The architects determined the building’s spatial programme with the members of the cooperative in four workshops. Thirty percent of the space in the house – i.e. about 800 m² – is set aside for communal rooms distributed on all storeys.

[…]

The building’s collage-like facades set it apart quite considerably from its neighbours. The outer cladding in reclaimed wood is just the most visible characteristic of a comprehensive sustainability concept. This includes 30-m ground piles to which piping has been added, so that they serve as geothermal heat exchangers for the heat pump that supplies the house with heat. The electricity for the heating is provided by a photovoltaic panel array on the roof. Altogether the building has an EPC rating of 0.16 according to Dutch energy regulations and is thus “energy-positive”.

The glazed lounge at the corner of the building marks the start of a continuous staircase – called the Macchu Picchu stairway by the architects – that connects the numerous communal spaces scattered through the house, such as the children’s playroom, a music studio, several co-working offices, a meditation room, a shared roof terrace with greenhouses and several communal kitchens.

Source: DETAIL

Thought Shrapnel podcast: Episode #000

Thought Shrapnel episode promo card

While the rest of Team Belshaw was doing such novel things as socialising, working, and watching TV on Friday night, I decided to break out my microphone and record a solo podcast episode. Weighing in at around 20 minutes, it couldn’t in all honesty be called a ‘microcast’, so I ended up publishing it using Spotify’s creator tools.

Yes, there’s an RSS feed. Is this something I want to do regularly? Is it something people want to hear? I don’t know. Perhaps I need a CONTENT STRATEGY. (I do not need a content strategy.)

Spotify link

We are absolutely cooked

Someone shared a link to this Instagram video in which the person on the video claims:

A friend’s daughter fed her mom’s voice to AI and then used it to get out of school to hang with her friends. She also used it to have her friends sleepover. We are absolutely cooked.

I applaud this novel use of technology by the girl. It’s not much different to my son trying to forge my signature so that I didn’t find out about his detention.

Or, indeed, me using my dad’s credit card in 1996 when I wasn’t allowed on the internet. I started sequential month-long trials with CompuServe and AOL, going to the phone box at the end of the street to call the company, pretend to be my dad, and cancel the accounts.

Yes, I get that all of this AI stuff makes it easier to scam people, but then technology has always been an arms race. Knowing how to look for the signs of what’s real and what’s fake is, therefore, a part of AI Literacy.

The rapture is not something we wait for. It’s something we do.

Great stuff from Dan Meyer here. Even if he is representing an edtech company, he’s got more integrity in his little fingernail than many vendors.

I’m telling you, everybody, the problems stay the same. Every year, every decade, every new technology, the problems stay the same.

Figure out how to get along. Figure out how to share the surplus of what we build. Figure out who needs what and why they don’t have it. Figure out how to help people see that we see their value. No technology will ever change any of these unalterable challenges of human existence.

And I get it, they’re hard. It’s work. It’s also life. It’s the work of a lifetime. And I get why many people want a technological cheat code, an easy button, a rapture. But none’s coming. We’re stuck with us, people, the cause and solution of all of our problems. And new technologies can play a part, but only if we start with what people need. Teachers and students need content, sure, but especially connection.

The rapture is not something we wait for. It’s something we do.

Source: YouTube

What IPAs do you guys have on draft?

A man is sitting on a couch with his feet up on a coffee table, using a laptop, beside a tall floor lamp and a television on the wall.

I’m a Xennial who identifies much more with Millennial culture than with Gen X. In his new column for VICE, Drew Austin (of Kneeling Bus fame) talks about the end of Millennial culture coinciding with us coming out blinking from pandemic-induced lockdowns.

It’s a fair point. I’m 44 and the youngest Millennials are around 30 years old. Popular culture belongs to people in their teens and twenties, mostly, which means that what we thought was cool and hip is now old and stale. I quite enjoyed reading this on my sticker-covered laptop while listening to music from the early 2000s on my iPod HiFi. One could say I’m comfortably settling into the second half of my life.

The monuments that have endured also attest to the generation’s decline. The electric scooter boom of the late 2010s—arguably the millennials’ swan song, and an exemplary symbol of their distinctive culture—produced a strange but predictable side effect: piles of discarded and destroyed Bird and Lime scooters littering embankments and ponds and other marginal urban spaces. The literal trashing of these whimsical avatars of the millennial economy, documented in an Instagram account called Bird Graveyard, was also a fitting metaphor for the eventual state of so many other millennial artifacts: expired but still visible, scattered throughout the urban environment, persistent reminders of an embarrassing recent past. Today, these proverbial junk piles contain more than just scooters, but also IPAs, escape rooms, listicles, smash burgers, Garden State, MySpace, brunch, @shitmydadsays, tight jeans, sans serif fonts, life hacks, axe throwing bars, Williamsburg, speakeasies, Urban Outfitters, electroclash, fast casual bowls, food trucks, food delivery apps, ridesharing apps, laundry apps—apps for every conceivable action—and even 44th US President Barack Obama himself. Much of this remains permanently embedded in the landscape. No longer fresh, it’s now just the mundane infrastructure of everyday life. What IPAs do you guys have on draft?

[…]

Every month, it seems, there’s a new thinkpiece about how millennials are washed, usually written by millennials themselves (including this one, I suppose)—but the generation seems unconvinced by its own self-deprecating argument. “Millennials’ ability to drive a cycle of discourse around our age means we can still shape the conversation,” Bernstein writes. “For millennials who criticized their boomer parents for decades for not shuffling off the stage, the ‘look how old we are’ act may serve another purpose: prolonging our own time in the spotlight, and our own sense that we are the protagonists of history.”

This kind of navel-gazing, of course, has always been a millennial hallmark. Millennials invented social media and were immediately its most dedicated users, becoming the first generation who could expect their own audience regardless of how exceptional they were, and the first to enjoy a forum where they could process their neuroses and insecurities in public. One could hardly expect millennials to bow out gracefully after 20 years of such preening online; talking is what they do best, and it’s becoming clear they’ll still be doing it when no one else is listening.

[…]

One of the emergent qualities of the digital culture millennials shaped is that nothing ends any more. Wars and pandemics drag on; aging bands keep touring in a perpetual state of reunion rather than breaking up; politicians circle the drain into their eighties and nineties; bygone aesthetics and styles are forgotten and rediscovered in shorter and shorter cycles. We seem unable to fully metabolize experiences and move on, for better or worse; we suffer from cultural acid reflux.

The paradox of the internet is that it enables this endlessness while also making culture less durable and more disposable. Millennials, again, were the first generation to bank a large share of their cultural capital online, which now seems to guarantee its swift erasure. As the generation’s Obama-era heyday recedes farther into the past, its most significant accomplishments feel increasingly elusive, hazy, out of reach, or just illegible, revealing the digital ground it all stood upon to be an unstable foundation. The rewards for millennials’ technological adventurousness have been obvious—wealth, attention, convenience, abundance of all kinds—with the drawbacks mostly becoming evident only later. And one of these drawbacks is ephemerality: The millennials’ curse is to have built their castles on sand, to see their contributions begin fading as quickly as they once appeared, to leave no lasting proof of their erstwhile relevance. The cultural significance that was attainable in the 20th century has itself become a casualty of the internet. All those moments lost in time, like tears in rain.

Source: VICE

Image: Kenny Eliason

Three clear predictors of impatience

A person appears in a state of distress with hands on their head, surrounded by a blurred, translucent effect.

As I’ve long suspected, researchers have found evidence that patience is not a virtue, but rather a coping strategy.

TL;DR: you’re more likely to be impatient when stuck in a particularly unpleasant state, when you want to reach an intended goal, and when someone is clearly to blame for the frustration.

So now you know.

Each hypothetical situation came in two versions, with one designed to provoke high levels of impatience, and the other only low levels. In one story, for example, the participant was asked to imagine that they were watching a film in a cinema and a child nearby was being noisy. In the ‘low impatience’ version of this scenario, the parents were doing everything they could to calm the child, while in the other, they were described as doing nothing. In addition to this, participants also completed a range of questionnaires, including a personality test and a measure of their ability to regulate their emotions.

When the team analysed the resulting data, they found three clear predictors of impatience. Participants said that they would feel more impatient when they were stuck in a particularly unpleasant state (waiting for an appointment without a seat, for example); when they particularly wanted to reach their intended goal (when they were on their way to a concert by a band they really wanted to see but were stuck in traffic); and, finally, when someone was clearly to blame for the frustration (in the cinema example, this was when the parents were described as ignoring their noisy child).

These three situation characteristics consistently provoked impatience across different scenarios, the team reports. In the third study, which also asked participants to rate the objectionableness of the situation, they found that those that had any of those three characteristics also got higher objectionableness ratings. Together, these results provide “tentative evidence” the emotion of impatience is prompted by perceiving one of these three characteristics, they write.

However, when the researchers analysed the data on how patient the participants thought they would be in the various scenarios, they found that, in general, these results were linked less to the specific situation and more to variations in individual factors. Specifically, better scores on the measures of impulsivity, emotional awareness and flexibility, and also the personality trait of agreeableness were all linked to higher patience scores.

Source: The British Psychological Society

Image: Uday Mittal

No one is actually dead until the ripples they cause in the world die away

Two green leaves float on water with ripples, surrounded by reflections of trees.

I wouldn’t usually feature one of my own posts on Thought Shrapnel, but in this case there are a couple of good reasons. First, you may not have ever listened to Today In Digital Education (TIDE), a podcast I recorded with my good friend Dai Barnes between 2015 and his tragic passing in 2019. I think you’d enjoy it.

Second, though, you may have been a listener, and somehow still have the audio files for episodes 26 and/or 37. I’m not sure how, but they currently seem lost to the sands of time. If you do have them, could you let me know? I’d love to create a complete archive.

This post is to memorialise and provide an archive for Today In Digital Education (TIDE), a podcast I recorded with my good friend Dai Barnes between 2015 and 2019. It was the second podcast I co-hosted with him, the first being EdTechRoundUp from 2007 to 2011.

[…]

Dai sadly passed away suddenly in his sleep in early August 2019. In the eulogy I gave for Dai at Oundle School, I quoted Terry Pratchett as saying that “No one is actually dead until the ripples they cause in the world die away.” I still miss Dai and know his ripples continue on through many of us.

Source: Open Thinkering

Image: Snappy Shutters

The cost of inaction is higher than the cost of transformation and adaptation

Protesters hold a large banner that reads 'OUR HOUSE IS ON FIRE' with an illustration of a burning Earth, amid a crowd in an urban setting.

The headline that The Guardian chose to use for this article is “Climate crisis on track to destroy capitalism, warns top insurer.” I mean, if only.

But, of course, the reality is entirely the opposite way around: capitalism is destroying the climate. The only thing that provides some solace in this article is the realisation that people, organisations, and governments will be unable to get insurance, which will in turn put (positive) pressure on Net Zero targets.

At least, I hope that will be the case. Otherwise we’re going to have a Mad Max-style world on fire with lots of poverty and migration.

The world is fast approaching temperature levels where insurers will no longer be able to offer cover for many climate risks, said Günther Thallinger, on the board of Allianz SE, one of the world’s biggest insurance companies. He said that without insurance, which is already being pulled in some places, many other financial services become unviable, from mortgages to investments.

Global carbon emissions are still rising and current policies will result in a rise in global temperature between 2.2C and 3.4C above pre-industrial levels. The damage at 3C will be so great that governments will be unable to provide financial bailouts and it will be impossible to adapt to many climate impacts, said Thallinger, who is also the chair of the German company’s investment board and was previously CEO of Allianz Investment Management.

The core business of the insurance industry is risk management and it has long taken the dangers of global heating very seriously. In recent reports, Aviva said extreme weather damages for the decade to 2023 hit $2tn, while GallagherRE said the figure was $400bn in 2024. Zurich said it was “essential” to hit net zero by 2050.

[…]

No governments will realistically be able to cover the damage when multiple high-cost events happen in rapid succession, as climate models predict, Thallinger [on the board of Allianz SE, one of the world’s biggest insurance companies] said. Australia’s disaster recovery spending has already increased sevenfold between 2017 and 2023, he noted.

[…]

Many financial institutions have moved away from climate action after the election of the US president, Donald Trump, who has called such action a “green scam”. Thallinger said in February: “The cost of inaction is higher than the cost of transformation and adaptation. If we succeed in our transition, we will enjoy a more efficient, competitive economy [and] a higher quality of life.”

Source: The Guardian

Image: Mika Baumeister

The Great Democratization Cycle

A sleek laptop on a modern desk displays colorful abstract art on its screen, surrounded by speakers, a mug, and a plant.

This article by Pete Sena is a curious one. On the one hand, he makes some really solid points about ‘vibe coding’, which I’d define as using natural language to create digital artefacts containing code. Most commonly these are web apps, such as the ones I’ve created:

  • Album Shelf — make virtual shelves of music albums to set as your video conference background
  • Badge to the Future — a Verifiable Credentials issuing and portfolio platform
  • Career Discovery Tool — a question-based tool using the Perplexity and Lightcast APIs to find which jobs might be suitable (and less likely to be automated)

Sena goes on, however, to start talking about ‘craft’, as if somehow lots of people being able to manipulate code is going to destroy the industry. It won’t. There will just be a lot more people able to go from idea to execution quickly.

Does that mean that every vibe coded app will be scalable and secure? Obviously not. But this is the worst the technology is going to be, so buckle up, folks! If you’re interested in a potential course I’m going to offer around all this, you can sign up at vibe.horse.

Vibe coding is also just one more example of what I call the Great Democratization Cycle. We’ve seen it in photography as it evolved from darkrooms to digital cameras, which eliminated film processing, to smartphones and Instagram filters, making everyone a high-end “photographer.” The same goes for publishing (from printing presses to WordPress), video production (from studio equipment to TikTok), and music creation (from recording studios to GarageBand on a laptop and now AI tools like Suno on your smartphone).

[…]

This AI-driven accessibility is undeniably powerful. Designers can prototype without developer dependencies. Domain experts can build tools to solve specific problems without learning Python. Entrepreneurs can validate concepts without hiring engineering teams.

But as we embrace this new paradigm, we face a profound question: What happens when we separate makers from their materials?

[…]

Consider this parallel: Would we celebrate a world where painters never touch paint, sculptors never feel clay, or chefs never taste their ingredients? Would their art, their craft, retain its soul?

When we remove the intimate connection between creator and medium — in this case, between developer and code — we risk losing something essential: the craft.

[…]

True innovation often emerges from constraints and deep domain knowledge. When you wrestle with a programming language’s limitations, you’re forced to think creatively within boundaries. This tension produces novel solutions and unexpected breakthroughs.

When we remove this friction entirely, we risk homogenizing our solutions. If everyone asks AI for “a responsive e-commerce site with product filtering,” we’ll get variations on the same theme — technically correct but creatively bankrupt implementations that feel eerily similar.

Source: Pete Sena

Image: BoliviaInteligente

I warned that LLMs would be used for dumb things that would affect lots of people

A Twitter user comments on AI models providing similar tariff policy formulas as those published by the White House, displayed alongside a screenshot of a detailed AI-generated comparison chart.

I’m a daily, but not uncritical, user of generative AI. One of the particularly problematic uses of the technology is as an objective, neutral, and all-knowing arbiter for decision making.

It’s bad enough doing this on a local level, when not many people are involved. It’s much worse when brought into, say, the benefits system, and of course much, much worse when used (allegedly) to dictate punitive tariffs.

In Taming Silicon Valley, I warned that LLMs would be used for dumb things that would affect lots of people.

I rest my case.

Source & screenshot: Marcus on AI

To cope, the brain improvises

A gradient background transitions smoothly from blue to pink.

My wife’s favourite colour is purple. Which doesn’t really exist — it’s a nonspectral colour. But then, strictly speaking, no colours exist. Phenomenology FTW.

Our eyes can’t see most wavelengths, such as the microwaves used to cook food or the ultraviolet light that can burn our skin when we don’t wear sunscreen. We can directly see only a teeny, tiny sliver of the spectrum — just 0.0035 percent! This slice is known as the visible-light spectrum. It spans wavelengths between roughly 350 and 700 nanometers.

[…]

Although violet is in the visible spectrum, purple is not. Indeed, violet and purple are not the same color. They look similar, but the way our brain perceives them is very different.

[…]

When light enters our eyes, the specific combination of cones it activates is like a code. Our brain deciphers that code and then translates it into a color.

Consider light that stimulates long- and mid-wavelength cones but few, if any, short-wavelength cones. Our brain interprets this as orange. When light triggers mostly short-wavelength cones, we see blue or violet. A combination of mid- and short-wavelength cones looks green. Any color within the visible rainbow can be created by a single wavelength of light stimulating a specific combination of cones.

[…]

In the middle of the rainbow — colors like green and yellow — the mid-wavelength cones are busiest, with help from both long- and short-wavelength cones. At the blue end of the spectrum, short-wavelength cones do most of the work.

But there is no color on the spectrum that’s created by combining long- and short-wavelength cones.

[…] Purple is a mix of red (long) and blue (short) wavelengths. Seeing something that’s purple… stimulates both short- and long-wavelength cones. This confuses the brain. […]

To cope, the brain improvises. It takes the visible spectrum — usually a straight line — and bends it into a circle. This puts blue and red next to each other.

[…]

Colors that are part of the visible spectrum are known as spectral colors. It only takes one wavelength of light for our brain to perceive shades of each color. Purple, however, is a nonspectral color. That means it’s made of two wavelengths of light (one long and one short).
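To make the cone ‘code’ idea above concrete, here’s a toy sketch. The three cone types and the rough wavelength ranges are real enough, but the hard cut-offs are my illustrative assumptions, not actual cone response curves:

```python
# Toy model of the cone 'code' described above. The hard wavelength
# cut-offs are illustrative assumptions, not real colorimetry.

def cones_activated(wavelength_nm):
    """Return the set of cone types a single wavelength stimulates."""
    active = set()
    if 400 <= wavelength_nm <= 500:
        active.add("S")   # short-wavelength cones (blue/violet end)
    if 450 <= wavelength_nm <= 630:
        active.add("M")   # mid-wavelength cones (green/yellow)
    if 500 <= wavelength_nm <= 700:
        active.add("L")   # long-wavelength cones (orange/red end)
    return active

# Sweep the visible spectrum: no single wavelength produces the
# S+L-without-M combination, because the S and L ranges only meet
# where M is also firing.
purple_code = {"S", "L"}
single_wavelengths = [cones_activated(nm) for nm in range(400, 701)]
print(purple_code in single_wavelengths)                            # False

# Purple needs *two* wavelengths, one short and one long:
print(cones_activated(420) | cones_activated(680) == purple_code)   # True
```

Violet, by contrast, sits at the short end of the spectrum and needs only a single wavelength, which is why the brain treats the two so differently.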

Source: ScienceNewsExplores

Image: Luke Chesser

The future of the many diasporas which already characterize our present

A flag featuring a blue circle with a diagonal white stripe and yellow arrows is waving against a sky filled with flying birds.

I was all ready to summarise a post about an internet of many autonomous communities, but what really caught my eye was an article the author links to from 2017.

I’ve already referenced Episode #324 (“What’s Good for the Goose”) of Dan Carlin’s Common Sense podcast this week, and will do so again in relation to this. We’re entering a point in history where the assumptions made at the founding of nation states are being challenged by the digital technologies that allow instant communication between continents.

Ada Palmer is a novelist and historian whose award-winning Terra Ignota series explores a future of borderless nations. If we stop and think for a moment, those who work from home and don’t have much of a geo-specific social life (🙋) can already choose to live quite differently to their neighbours. I can only see that becoming even more the case over time.

What if citizenship wasn’t something we’re born with, but something we choose when we grow up? In the Terra Ignota future, giant nations called “Hives” are equally distributed all around the world, so every house on a block, and even every person in a house, gets to choose which laws to live by, and which government represents that individual’s views the most. It’s an extension into the future of the many diasporas which already characterize our present, since increasingly easy transportation and communication mean that families, school friends, social groups, ethnic groups, language groups, and political parties are already more often spread over large areas than residing all together. In this future each of those groups can be part of one self-governing nation, with laws that fit their values, even while all living spread over the same space.

Source & image: The Reactor

We create more than ever, but it weighs nothing

A black X is marked on a surface with a yellow label that says 'HEAVY' placed over it.

I discovered this post by Dougald Hine via Warren Ellis, which in turn links to Anu’s exhortation to ‘make something heavy’.

The identification of people being ‘pre-heavy thing’ or ‘post-heavy thing’ is an interesting concept. Perhaps I need to think about my next heavy thing?

If something is heavy, we assume it matters. And often, it does. Weight signals quality, durability, presence, permanence.

[…]

We accept this in the physical world.

But online, we forget.

[…]

The modern makers’ machine does not want you to create heavy things. It runs on the internet—powered by social media, fueled by mass appeal, and addicted to speed. It thrives on spikes, scrolls, and screenshots. It resists weight and avoids friction. It does not care for patience, deliberation, or anything but production.

It doesn’t care what you create, only that you keep creating. Make more. Make faster. Make lighter. (Make slop if you have to.) Make something that can be consumed in a breath and discarded just as quickly. Heavy things take time. And here, time is a tax. And so, we oblige—everyone does. We create more than ever, but it weighs nothing.

[…]

Creation isn’t just about output. It’s a process of becoming. The best work shapes the maker as much as the audience. A founder builds a startup to prove they can. A writer wrestles an idea into clarity. You don’t just create heavy things. You become someone who can.

[…]

At any given time, you’re either pre–heavy thing or post–heavy thing. You’ve either made something weighty already, or you haven’t. Pre–heavy thing people are still searching, experimenting, iterating. Post–heavy thing people have crossed the threshold. They’ve made something substantial—something that commands respect, inspires others, and becomes a foundation to build on. And it shows. They move with confidence and calm. (But this feeling doesn’t always last forever.)

Source: Working Theorys

Image: Keagan Henman

The vaunted first amendment guaranteeing free speech has become a bitter and twisted joke

A young woman holding a sign reading 'Make empathy great again'

As I’ve seen others post about, there’s no easy way to calculate the impact and lost value of the research that won’t be done, the breakthroughs that won’t be made, and the collaborations that won’t happen as a result of the oligarchy currently taking over the US political system.

In this post, Prof. Christina Pagel gives just one small example of the self-censorship ‘just in case’ that will be happening everywhere. I didn’t travel to the US last year because it felt unsafe to visit; I sure as hell ain’t going this year.

Relatedly, I’d highly recommend listening to Episode #324 (“What’s Good for the Goose”) of Dan Carlin’s Common Sense podcast as it puts current events in a wider context.

A colleague and I would like to write an academic paper on the potential impact of US funding cuts to global health programmes. Our ideal co-author is an international expert newly based in the US, and they would like to do it. But we are all worried that doing so will expose them to the risk of having their academic visa cancelled, being detained and eventually deported - no matter how solid the science and how academic and dry our language. We are especially fearful because they are brown.

My colleagues who have been writing about the new administration, or the situation in Gaza, in academic journals, on substack or on social media are cancelling work trips to the US. I too would not feel safe to go now, given how openly I have criticised the administration. Even a 1% chance of being denied entry or shipped to a detention centre is too high.

When I said these words out loud to my husband today I had to stop for a moment to let it sink in. Foreign scientists in the US are scared to publish anything perceived as critical for fear of being bundled off the street to a detention centre. Foreign scientists abroad are scared to go to the US because they have voiced criticism of the state. The US is actively cracking down on perceived dissenters and foreigners are the most vulnerable to arbitrary detention and lack of due legal process. The vaunted first amendment guaranteeing free speech has become a bitter and twisted joke.

Source: Diving into Data & Decision making

Image: Floris Van Cauwelaert

The Ghibli crisis is just the beginning

Distracted boyfriend meme in Ghibli style

As I’ve argued many times, including just last week, appending ‘literacy’ to a word is an attempt at control. It’s a power move, either intentionally or unintentionally. So, for example, with the work that I’ve been kicking off around AI Literacy recently with the BBC, it’s interesting to see the way that Big Tech wants to define it compared to, say, academics.

In this post, Jay Springett introduces the term ‘context literacy’ but doesn’t really define what it means. I don’t doubt that what he identifies in the post is a set of important skills and competencies, but is it a ‘literacy’? Is it a way of metaphorically ‘reading’ and ‘writing’? Or is it just a way of understanding and making sense of the world?

I definitely agree that we’re in the midst of another culture war that, perhaps more than ever before, is predicated on a lack of shared context. I’ve started watching the Contrapoints video I shared recently about conspiracism, which I think is very closely related to this.

The Ghibli crisis is just the beginning. Focusing on the outputs alone misses the point.

So how do we respond?

We must recognise that revolution is not over. We are in the Information Age.

We must cultivate context literacy and we must maintain a distinction between the infrastructure and the experience, between machine and meaning.

We are living through a moment that future historians may describe as a cultural rupture. A context war. How this plays out will shape new definitions of truth, authorship, creativity, and trust, perhaps for centuries to come.

The question is not whether this will happen.

It already is.

Source: thejaymo

Image: Distracted boyfriend meme in Ghibli style

Organisations will need to change their analogies

A robotic hand and a human hand are reaching towards each other against a pink background.

Ethan Mollick reports on a study last summer with 776 professionals at Procter and Gamble. The findings, pretty obviously, show that working with AI boosts performance, but also that working in teams is just as effective as working with AI. Teams working with AI “were significantly more likely to produce… top-tier solutions.”

What’s interesting to me, though, is the emotional aspect of all this. Unless you’ve done work around, say, nonviolent communication and Sociocracy, it’s likely that you experience regular unprocessed negative emotions around work. Especially if you work in a hierarchical setting.

Generative AI can be particularly good at helping you think more objectively about work — as something that you have ideas and thoughts about, rather than emotions. At least, it is for me. Note that I definitely think you should bring your full self to work; it’s just that unhelpful negative emotions can sometimes creep into our relationships with other humans, especially around the validation (or otherwise) of ideas.

For me, the sweet spot is working with people I know, respect, and trust (i.e. my colleagues at WAO) and using generative AI to augment our collaboration.

A particularly surprising finding was how AI affected the emotional experience of work. Technological change, and especially AI, has often been associated with reduced workplace satisfaction and increased stress. But our results showed the opposite, at least in this case.

People using AI reported significantly higher levels of positive emotions (excitement, energy, and enthusiasm) compared to those working without AI. They also reported lower levels of negative emotions like anxiety and frustration. Individuals working with AI had emotional experiences comparable to or better than those working in human teams.

While we conducted a thorough study that involved a pre-registered randomized controlled trial, there are always caveats to these sorts of studies. For example, it is possible that larger teams would show very different results when working with AI, or that working with AI for longer projects may impact its value. It is also possible that our results represent a lower bound: all of these experiments were conducted with GPT-4 or GPT-4o, less capable models than what are available today; the participants did not have a lot of prompting experience so they may not have gotten as much benefit; and chatbots are not really built for teamwork. There is a lot more detail on all of this in the paper, but limitations aside, the bigger question might be: why does this all matter?

[…]

To successfully use AI, organizations will need to change their analogies. Our findings suggest AI sometimes functions more like a teammate than a tool. While not human, it replicates core benefits of teamwork—improved performance, expertise sharing, and positive emotional experiences. This teammate perspective should make organizations think differently about AI. It suggests a need to reconsider team structures, training programs, and even traditional boundaries between specialties. At least with the current set of AI tools, AI augments human capabilities. It democratizes expertise as well, enabling more employees to contribute meaningfully to specialized tasks and potentially opening new career pathways.

The most exciting implication may be that AI doesn’t just automate existing tasks, it changes how we can think about work itself. The future of work isn’t just about individuals adapting to AI, it’s about organizations reimagining the fundamental nature of teamwork and management structures themselves. And that’s a challenge that will require not just technological solutions, but new organizational thinking.

Source: One Useful Thing

Image: Igor Omilaev

Once you become aware of Hyperlegibility, you see it everywhere

Illegible sign

I think the author of this article, Packy McCormick, has essentially discovered “working openly.” But, I guess, the twist is that it’s doing so in a way that makes your ideas accessible and understandable to as many people as possible.

It’s interesting doing this in an age of AI, because (as McCormick does) you can half-remember something, type it into an LLM, and have it find the thing you’re talking about, with sources, extremely quickly.

If ‘hyperlexic’ describes extraordinary reading ability, then let me propose a complementary word for extraordinary readability: Hyperlegible.

Hyperlegibility defines our current era so comprehensively that I was shocked when I googled the term and found only references to fonts.

[…]

Hyperlegibility emerges with game theoretical certainty from each of our desire to win whatever game it is you’re playing. Certainly, it’s a consequence of playing The Great Online Game. In order for the right people and projects to find you, you must make yourself legible to them. To stand out in a sea of people making themselves legible, you must make yourself Hyperlegible: so easy to read and understand you rise to the top.

Once you become aware of Hyperlegibility, you see it everywhere.

[…]

Hyperlegibility isn’t good or bad. It’s neither and both. But it certainly is. Information used to be the highest form of alpha. Now everyone bends over backwards to leak it.

Through a combination of humanity getting ever-better at reading anything and humans becoming ever-more willing to make themselves legible, information is easier to find and understand than it’s ever been.

Source: Not Boring

Image: Egor Myznik

Less anonymity online is not going to make things better

A padlock

I use the Switzerland-based service Proton for my personal email and VPN, so news that the Swiss government is considering amending its surveillance law isn’t great news.

It looks like the specific thing they’re targeting is the metadata — i.e. not the content of the message, but where it was sent from and to whom. That’s the kind of information that Meta collects when people use WhatsApp. By way of comparison, Proton is more like Signal messenger in that they don’t harvest this kind of metadata.

You might wonder why this is important, but putting together a story based on metadata isn’t exactly difficult. And, as is well-attested, if you have enough metadata, you don’t really need the content of the messages.
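To make that concrete, here’s a minimal sketch (with entirely made-up, hypothetical records) of how a handful of metadata entries — just who contacted whom, and when, with no message content at all — is enough to surface a person’s relationships:

```python
from collections import Counter

# Hypothetical metadata records: (sender, recipient, timestamp).
# Note there is no message content here at all.
records = [
    ("alice", "clinic", "2025-03-01 09:00"),
    ("alice", "clinic", "2025-03-08 09:00"),
    ("alice", "journalist", "2025-03-02 22:15"),
    ("bob", "alice", "2025-03-03 18:30"),
]

# Count contacts per (sender, recipient) pair to build a simple social graph.
edges = Counter((sender, recipient) for sender, recipient, _ in records)

for (sender, recipient), n in edges.most_common():
    print(f"{sender} -> {recipient}: {n} contact(s)")
```

Even this toy version shows a repeated link to a clinic and a late-night contact with a journalist — the kind of story metadata tells on its own, which is exactly why collecting it matters.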

Consultations are now public and open until May 6, 2025. Speaking to TechRadar, NymVPN has explained how it’s planning to fight against it, alongside encrypted messaging app Threema and Proton, the provider behind one of the best VPN and secure email services on the market.

Authorities' arguments behind the need for accessing more data are always the same – catching criminals and improving security. Yet, according to Nym’s co-founder and COO, Alexis Roussel, being forced to leave more data behind would achieve the opposite result.

“Less anonymity online is not going to make things better,” he told TechRadar. “For example, enforcing identification of all these small services will eventually push to leaks, more data theft, and more attacks on people.”

[…]

“It’s not about end-to-end encryption. They don’t want to force you to reveal what’s inside the communication itself, but they want to know where it goes,” Roussel explains. “They realize the value is not in what is being said but who you are talking to.”

“The whole point of security and privacy is not being able to link the usage to the person. That’s the most critical thing,” Roussel told TechRadar.

Source: TechRadar

Image: Arturo A

You do not have to participate in the lottery

Close-up of lottery balls with numbers moving

This is the first post I’ve come across from Paras Chopra, and I love the strapline for his blog: “Be passionate about the territory, not the map.” He’s still young, and the overall philosophy of life he outlines here is a touch naive (reminding me of myself at that age) but nevertheless it’s solid advice.

In most modern cultures, direct coercion doesn’t exist. Nobody can make you work harder than you want to. However, with our infinite algorithmic timelines, we’re immersed with indirect coercion.

But, you do not have to participate in the lottery. You can choose to quit. You can decide to not compete. You can choose to not participate in the lottery, where you’d almost likely lose more than you receive in return.

To be clear, this doesn’t mean inactivity. (Life is a game, where inactivity means death.)

Rather, what this implies is something very simple – don’t confuse what gets social approval with what’s right for you. Social approval exists to attract participants in a game that ultimately benefits the collective at the expense of an individual.

[…]

Once you overcome your desire to compete with others, you can actually just sit back and enjoy the outcomes that others compete to produce for you.

[…]

Let others compete hard to let you enjoy these things, while you do what you find most fun. It could be tending to your garden, working at a sensible pace, making coffee, building tiny weird games, or whatever else makes you come alive.

I hear you ask: won’t society collapse if everyone did this? I’d argue the opposite. If everyone did what they find most fulfilling, our net happiness will rise. Artifacts useful to the society will still be produced, except with less anxiety and burnout. People will still write books, but without an intent of it trying to be a bestseller but with an intent of honing and enjoying their craft.

Source: Inverted Passion

Image: dylan nolte

Things have changed

A tree from a distance along with grass and lots of blue sky

I know Martin Waller from the early days of Twitter when we were both teachers. The ‘Multi’ part of his ‘MultiMartin’ handle is due to his work on multiliteracies, the subject of his postgraduate study. He’s written plenty of papers and spoken at more than a few events.

Martin dropped off my radar a bit, but on reconnecting with him this week, he pointed me towards this post. I think we’re going to see a lot more of this over the next few years. How would Gwyneth Paltrow put it? “Conscious uncoupling” from online life, maybe?

I’ve… taken the decision to completely remove myself from social media including most online messaging services. I was once an advocate of the use of social media and digital technologies and have published book chapters and spoken at events around the world about it in education. I have made so many wonderful connections and friends over the years on the internet and through social media. However, things have changed. The landscape and current climate can be toxic and dangerous. I’ve stumbled upon comments on Facebook and Instagram which have made me feel sick. I’ve read different messenger channels where comments have ridiculed people for no good reason. There’s also just too much information out there and it isn’t helpful to be exposed to it all, all of the time. It makes it difficult to switch off, think and focus on what matters.

Source: MultiMartin

Image: Chris Barbalis

A bit of composting

Diagram showing two loops with the 'dominant system' and the 'emerging system'

There’s a lot of people thinking about endings at the moment. Not just because we’re getting into the post-Covid era now, but also due to things like the huge swathes of layoffs in the US, and the general economic downturn in the UK and other countries.

Things start and things end. That’s what they do. Change is constant, which is something difficult to get used to. In this post, Tom Watson talks about ‘composting’, which is a key part of the Berkana Institute’s Two Loops model. He doesn’t actually reference the model, but it’s extremely relevant.

I’d also say working openly helps with having good endings. The chances are that what has been learned during the project can take root elsewhere, and therefore live on, if people can see into the project. We should treat projects less as raised beds for pretty flowers and more like mycelium networks.

I thought about the social enterprise I had with my dad. I’ve got lots of things wrong in life, but doing that definitely wasn’t one of them. I learned a lot, about caring, kindness, not following the bullshit, and community. It ended, like all organisations do. But I was proud that we were able to financially support a group to continue meeting and chatting, and supporting each other. It wasn’t much, but it was something. And just last year we donated the last of our funds, around £6k, to a charity rewilding in Scotland, where the C for Campbell in my name comes from via my dad.

This felt like a good ending. A bit of composting. But not every organisation can pass on finances at the end, in fact it’s pretty rare, because often the root cause of the ending is money. But that doesn’t mean they don’t have things of value to pass on. They have resources, knowledge and wisdom. And I couldn’t help thinking about all that is lost, again and again when things end.

[…]

[W]e need to think broader about endings and composting. Not just when an organisation ends, but when programmes and projects end. What about all the research reports, the data from projects, the experiments that worked and those that didn’t. What about all that knowledge, what about all that potential wisdom. It’s why we spend time up front cataloguing all the things we do in a project along the way. It’s not perfect, but it’s something.

Source: Tomcw.xyz

Image: Innovation Unit

Discussing misinformation for the purpose of pointing out that it is misinformation

ContraPoints is “an American left-wing YouTuber, political commentator, and cultural critic” with the moniker coming from the fact that her content “often provide[s] counterargument to right-wing extremists and classical liberals.” She “utilizes philosophy and personal anecdotes to not only explain left-wing ideas, but to also criticize common conservative, classical liberal, alt-right, and fascist talking points,” with the videos having “a combative but humorous tone, containing dark and surreal humor, sarcasm, and sexual themes.”

I haven’t watched this yet, partly because I struggle to fit watching videos longer than 10 minutes into my day, and partly because it is absolutely the kind of thing I would watch by myself. The first few minutes are fantastic, I can tell you that much, with the focus being on conspiracy theories and misinformation.

Our current level of discourse, where random jokes are treated like they’re chiseled into stone by a divine hand

Clowns in full make-up and wigs crouching behind a wall while holding assault rifles. A guy with glasses in chinos and a t-shirt is standing with him. The caption next to him reads 'That guy from the Atlantic'

I’m not Very Online™ enough to be able to understand what’s going on in popular culture, especially when it comes to the business models, politics, and norms behind it. So, thank goodness for Ryan Broderick, who parses all of this for the rest of us.

In this part of one of his most recent missives, Broderick talks about Barack Obama joining Bluesky, and the history (and trajectory) of people acting like brands, and brands acting like people. He says a lot in these two paragraphs, which in his newsletter he then goes on to connect to the recent incident where a reporter from The Atlantic was accidentally added to a White House Signal war-planning group.

We live in interesting times, but mainly flattened times, where nothing is expected to have any more significance than anything else, and everything is presented to us via a little black rectangle. At this point, I feel like I want to write another thesis on misinformation and disinformation in the media landscape. But perhaps it would be too depressing.

Bluesky made a big splash at South By Southwest earlier this month, with CEO Jay Graber delivering the keynote in a sweatshirt making fun of Mark Zuckerberg. When they made the shirt available for sale, it sold out instantly — in large part because it’s the first time ever that regular people have been able to give Bluesky money. The platform started as a decentralized, not-for-profit research effort, specifically trying to avoid the mistakes of Twitter, and before the shirt, it was still funded entirely by investors. Though, as of last year, they’re working on paid subscriptions. The Bluesky team has been swearing up and down that they’re working to avoid the mistakes Twitter/X has made, but if they eventually offer a subscription to Obama that treats his account identically to yours or mine, they’ve already made the most fundamental mistake here. Because the social media landscape Obama helped create, by blending the casual and the official, is the exact same one Bluesky was founded to work against. If a brand is a person, then a person has to be a brand, especially in an algorithmically-controlled attention economy that’s increasingly shifting literally everything about social media towards getting your money. And more importantly, if a government official or group has a social media presence, it has to be both a person and a brand.

And eight years after Obama walked this tightrope all the way to the White House, Donald Trump ran it up the gut. Trump, unfortunately, understands the delirious unreality of the person/brand hybrid better than maybe anyone else on the planet. Well, he might be tied with WWE’s Vince McMahon. But Trump’s first administration established a precedent of treating his tweets as official statements. And more directly than anything I’m blaming Obama for here, Trump sent us on the rollercoaster that just loop-de-looped past “a shitty website is all the transparency the US government needs” a few weeks ago. Now, there’s no difference between a post that’s an executive order, a commercial, or someone saying whatever bullshit is on their mind. In fact, it must serve as all of the above. On one end of this, you get our current level of discourse, where random jokes are treated like they’re chiseled into stone by a divine hand.

Source: Garbage Day

Image: Mastodon (various accounts posted this, couldn’t find the original)

Essentially a checklist of weird Instagram shit

Screenshot of video

I’m officially middle-aged, so have given up trying to understand under-40s culture. However, it’s still worth studying, especially when it’s essentially algorithmically-determined.

In this article, Ryan Broderick gives the example of Ashton Hall, a fitness influencer:

Hall’s video, which was originally shared to his Instagram page back in February, is essentially a checklist of weird Instagram shit. A dizzying mix of products and behaviors that make no sense and that no normal person would ever actually use or try, either because Hall figured out that they’re good for engagement on his page or because he saw them in other videos because they were good for those creators’ engagement.

And so we have things that people do, and are watched doing, because an inscrutable algorithm has decided that this is what people want to watch. So this is what is served, what people consume, and therefore the content which influencers make more of. And so it goes.

Culture shifts, and not always in good ways. As the Netflix series Adolescence shows, there is a sinister underbelly to all of this. But then that is, in itself, weaponised to suggest that the way to fix or solve things is to ban digital devices. Instead of, you know, digital and media literacies.

In terms of physical safety, one of the most dangerous things you can do is laugh at someone who considers themselves hyper-masculine. But this is also the correct response to all of this stuff. It’s ridiculous. So the key is to point out how ridiculous it is to boys and young men, not in a way that condemns others (unless it’s someone like Andrew Tate), but in a way that simply shows it doesn’t make any sense.

“Fifteen years ago this routine would get you called gay (or ‘metrosexual’) but is now considered peak alpha male behavior. Something weird has shifted,” influencer and commentator Matt Bernstein wrote of Hall’s video. And, yes, something has shifted. Which is that these people know that there are a lot of very sad men that are going to get served their videos, and they’re fully leaning into it.

Guys like Hall are everywhere, with vast libraries of masculinity porn meant to soothe your sad man brain. Nonsexual (usually) gender-based content, like the trad wives of TikTok, targets your desires the same way normal porn does. Unrealistic and temporarily fulfilling facsimiles of facsimiles that come in different flavors depending on what you’re into. There’s a guy who soaks his feet in coke. A guy who claims he goes to a gun range at six in the morning. A guy who brings a physical book into his home sauna. A guy who’s really into those infrared sleep masks and appears to have some kind of slave woman who has to bow to him every morning before he takes it off. A guy who does the face dunk with San Pellegrino, rather than Saratoga. An infinitely expanding universe of musclemen who want to convince you that everything in your life can be fixed if you start waking up at 4 AM to journal, buy those puffy running shoes, live in a barely furnished Miami penthouse, have no real connections in your life — especially with women — and, of course, as Hall tells his followers often on Instagram, buy their course or ebook or seminar or whatever to learn the real secrets to success.

And I’ve been surprised that this hasn’t come up more amid our current national conversation about men. Because this is the heart of it. There are a lot of very large, very dumb men who want you to sleep three hours a night and invest in vending machines and do turmeric cleanses and they all know that every man in the country is one personal crisis away from being algorithmically inundated by their videos. And as long as that’s the case, there’s really nothing we can do to fix things.

Source: Garbage Day

Image: Screenshot of video from Ashton Hall, fitness influencer

This confirms all my prejudices, I am pleased to say

Chart showing Verbal vs Writing scores in the GRE which shows Philosophy at the top right

I had a much-overdue catch-up with Will Bentinck earlier today, during which I discovered he holds a first-class degree in Philosophy! Obviously, Philosophy graduates are legitimately The Best™ so my already high opinion of him scaled new heights.

Talking of scaling new heights, check out what happens when you plot Philosophy graduates' verbal against writing scores. All of which backs up my opinion that a Humanities degree is, in general, the best preparation for life. And, more specifically, it helps you with levels of abstraction that are going to be even more relevant and necessary in our AI-augmented future…

This data suggests (but falls a long way short of establishing) that if we want to produce graduates with general, across-the-board smarts, physics and philosophy are disciplines to encourage [and possibly also that accountancy and business administration should be discouraged (this confirms all my prejudices, I am pleased to say!)].

Source: Stephen Law

Just seek to understand, and remember we understand a lot by doing

An illustration showing a hand holding a smartphone with its camera app open, capturing an image of a cartoon purple cat. The camera screen displays the cat framed with a square outline, and several facial features—such as the eyes, ears, and nose—are highlighted with smaller boxes and connected by thin lines, illustrating an AI recognition process. The cat is slightly blurred in the background outside the phone screen, emphasising the focus on the camera's view.

After some success ‘vibe coding’ both Album Shelf and a digital credentials issuing platform called Badge to the Future, I’ve run a couple of sessions called ‘F*ck Around and Find Out’ relating to AI. I also knocked up a career discovery tool.

I’m sharing all of these links because I think people should be experimenting with generative AI to see what it can do and where the ‘edges’ are. I said as much when recording an episode for Helen Beetham’s imperfect offerings podcast, which I’m hoping will come out soon.

Discussing some of this on the All Tech is Human Slack, Noelia Amoedo shared this post from her blog. After experimenting with other tools, she, like me, has settled on Lovable. I used the $50/month tier last month as I ran out of prompts on the $20/month level. You do get some for free.

What I think is explicit in Noelia’s post is the potential decentralisation of power this enables. What is implicit is that it takes curiosity to do this. As I said when signing off Helen’s podcast (spoiler alert!): “my experience has been that most people are intellectually lazy, extremely uncurious, and want to take something off the shelf, implement it without too much thought, and be considered ‘innovative’ for doing so.”

You might say that using AI is itself “intellectually lazy”. But I disagree. So long as you’re not just getting it to answer an essay question or come up with some trite, fascist-adjacent imagery, interacting with it involves choice (which model? what prompt?) and creativity.

I settled on Lovable, a Swedish company. The fact that they were European may have influenced my decision… or did I realize that later? I did subscribe, but only for a month (I seem to have learned to moderate my impulses just a bit). I took advantage of a business trip to write the code for my website, and I worked on it between meetings over two days. It must have taken about four or five hours overall, but it could have been done in one or two hours if the content had been ready when I started. It is possibly longer than what it would have taken with WordPress, I admit, but it gave me so much flexibility! I also own the code and I can take it anywhere. I did have to figure out some “techy” things, like how to turn the JavaScript code into something that I could upload to my hosting service, but Lovable was right there to provide any instructions I needed, and I have to confess I enjoyed bossing him (it!!!) around to change things here or there.

Asking for something in your own words and getting it done instantly almost feels like magic, just as graphical user interfaces felt magical when we were used to command lines. And if graphical user interfaces opened the digital world to so many people who were not digital till then, conversational user interfaces are already doing the same so much faster.

[…]

I can’t help but wonder: Could this be a way to decentralize and give digital power back to the people? Could small digital companies have a better shot at long-term survival in the new ecosystem that rises, or will power end up even more concentrated? Will this bring tighter software thanks to expert computer scientists empowered by AI, or will the digital space degrade due to a bunch of “spaghetti code” that is difficult to understand and maintain? Will no-code builders like Replit or Lovable bring people closer to understanding code, or will they have the opposite effect?

I have no answers, but let me just say that my curiosity brought me closer to the code than I had ever been in 25 years, and I’d encourage you to do the same: get just a bit closer from wherever your starting point may be. Just seek to understand, and remember we understand a lot by doing. The world is changing very quickly, and the new AI wave will most likely affect you whether you want it or not. You may as well understand what’s coming, and as we would say in Spain, take the bull by the horns.

Source: Noel-IA’s Substack

Image: Snapcat by Oleksandra Mukhachova & The Bigger Picture

The first fully-open LLM to outperform GPT3.5-Turbo and GPT-4o mini

OLMo 2 picture of a pelican on a bicycle

Thanks to Patrick Tanguay who commented on one of my LinkedIn updates, pointing me towards this post by Simon Willison.

Patrick was commenting in response to my post about ‘openness’ in relation to (generative) AI. Simon has tried out an LLM which claims to have the full stack freely available for inspection and reuse.

He tests these kinds of things by, among other things, getting it to draw a picture of a pelican riding a bicycle. The image accompanying this post is what OLMo 2 created (“refreshingly abstract”!). To be fair, Google’s Gemma 3 model didn’t do a great job either.

It’s made me think about what an appropriate test suite would be for me (i.e. subjectively) and what would be appropriate (objectively). There’s Humanity’s Last Exam but that’s based on exam-style knowledge which isn’t always super-practical.

mlx-community/OLMo-2-0325-32B-Instruct-4bit (via) OLMo 2 32B claims to be “the first fully-open model (all data, code, weights, and details are freely available) to outperform GPT3.5-Turbo and GPT-4o mini”.

Source: Simon Willison’s Weblog

Everyone is at least a little bit weird, and most people are very weird

Very yellow image depicting a person in a bathtub with their legs in the air. Their torso cannot be seen.

I like all of what I’ve read of Adam Mastroianni’s work, but I love this. I’d enthusiastically encourage you to go and read all of it.

Mastroianni discusses a time when he went for a job interview as a professor, realising that he had a couple of choices. He could be himself, or wear a mask. Ultimately, he decided to be himself, didn’t get the job, but everything was fine.

From there, he talks about some of the benefits and drawbacks of conformity as a species, noting that taking the mask off is incredibly liberating. I know this from experience, as it was the exact advice given to me by my therapist in 2019/20 and, guess what? Afterwards some people thought I was an asshole. But then, so did people before. At least I know where I’m at now.

[H]istorically, doing your own thing and incurring the disapproval of others has been dangerous and stupid. Bucking the trend should make you feel crazy, because it often is crazy. Humans survived the ice age, the Black Plague, two world wars, and the Tide Pod Challenge. 99% of all species that ever lived are now extinct, but we’re still standing. Clearly we’re doing something right, and so it behooves you to look around and do exactly what everybody else is doing, even when it feels wrong. That’s how we’ve made it this far, and you’re unlikely to do better by deriving all your decisions from first principles.

Maybe there are some lucky folks out there who are living Lowest Common Denominators, whose desires just magically line up with everything that is popular and socially acceptable, who would be happy living a life that could be approved by committee. But almost everyone is at least a little bit weird, and most people are very weird. If you’ve got even an ounce of strange inside you, at some point the right decision for you is not going to be the sensible one. You’re going to have to do something inadvisable, something alienating and illegible, something that makes your friends snicker and your mom complain. There will be a decision tucked behind glass that’s marked “ARE YOU SURE YOU WANT TO DO THIS?”, and you’ll have to shatter it with your elbow and reach through.

[…]

When you make that crazy choice, things get easier in exactly one way: you don’t have to lie anymore. You can stop doing an impression of a more palatable person who was born without any inconvenient desires. Whatever you fear will happen when you drop the act, some of it won’t ultimately happen, but some will. And it’ll hurt. But for me, anyway, it didn’t hurt in the stupid, meaningless way that I was used to. It hurt in a different way, like “ow!…that’s all you got?” It felt crazy until I did it, and then it felt crazy to have waited so long.

Source: Experimental History

Image: JOSHUA COLEMAN

Taking natural-looking motion to yet another level

The above Boston Dynamics video is currently doing the rounds, with yet more human-like movement. It’s pretty impressive.

The usual response to this kind of thing is amusement tinged with fear. But, as with everything, it’s the systems within which these things exist that are either problematic or unproblematic. For example, we have zero problem with these being part of a research facility; we might have reservations if they were used in military situations. But then, we already use drones in combat these days.

All of which to say: there are many dangerous and scary things in the world, and there are many dangerous and scary people in the world. The way we deal with these in a sustainable and low-drama way is through policies and processes. So, unless I have reason to believe otherwise, I’m going to imagine the robots in these videos doing the jobs that currently endanger human health, such as rescuing people from burning buildings, inspecting nuclear reactors, and even doing very repetitive tasks under time pressure in warehouses.

Yes, AI and robots are going to replace jobs. No, it’s not the end of the world.

[L]est we forget who’s been at the forefront of humanoid research for more than a decade, Boston Dynamics has just released new footage of its stunning Atlas robot taking natural-looking motion to yet another level.

[…]

As humans learn to walk, run and move in the world, we start anticipating little elements of balance, planning ahead on the fly in a dynamic and changing situation. That’s what we’re watching the AIs learn to master here.

The current explosion in humanoid robotics is still at a very early stage. But watching Atlas and its contemporaries do with the physical world what GPT and other language models are doing with the world of information – this is sci-fi come to life. […]

These things will be confined to factories for the most part as they begin entering the workforce en masse, but it’s looking clearer than ever that humans and androids will be interacting regularly in daily life sooner than most of us ever imagined.

Source: New Atlas

In many ways, Silicon Valley looks less like capitalism and more like a nonprofit

Screenshot of share price going down

Yes, yes, more AI commentary but this is a really good post that you should read in its entirety. I’m zeroing in on one part of it because I like the analogy of Silicon Valley looking less like capitalism and more like the nonprofit space.

TL;DR: just as we haven’t got fully self-driving cars yet, or any of the other techno-utopian/dystopian technology that was promised years ago, so we’re not about to all be immediately replaced by AI.

Yes, it’s going to have an impact. And, of course, lazy uncurious people are going to use it in lazy uncurious ways. But, the way I see it, we should be more interested in the structures behind the AI bubble. Because although there is useful tech in there, it remains a bubble.

In many ways, Silicon Valley looks less like capitalism and more like a nonprofit. The way you get rich isn’t to sell products to consumers, because you’re likely giving away your product for free, and your customers wouldn’t pay for it if you tried to charge them. If you’re a startup, and not FAANG, the way you pay your bills is to convince someone who’s already rich to give you money. Maybe that’s a venture capital investment, but if you want to get really rich yourself, it’s selling your business to one of the big guys.

You’re not selling a product to a consumer, but selling a story to someone who believes in it, and values it enough to put money towards it. That story of how you can change the world could be true, of course. Plenty of nonprofits have a real and worthwhile impact. But it’s not the same as getting a customer to buy a product at retail. Instead, you’re selling a vision and then a story of how you’ll achieve it. This is the case if you go to a VC, it’s the case if you get a larger firm to buy you, and it’s the case if you’re talking ordinary investors into buying your stock. (Tesla’s stock price is plummeting because Musk’s brand has made Tesla’s brand toxic. But Tesla’s corporate board can’t get rid of him, because investors bought Tesla’s stock—and pumped it to clearly overvalued levels—precisely because they believe in the myth of Musk as a world-historical innovator who will, any day now, unleash the innovations that’ll bring unlimited profits.) (Silicon Valley has, however, given us seemingly unlimited prophets.)

What this means for AI is that, even if the tech bros recognized how far their models are from writing great fiction or solving the trolley problem, they couldn’t admit as much, because it would deflate the narrative they need to sell.

Source: Aaron Ross Powell

Image: Maxim Hopman

Dozens of small internet forums have blocked British users or shut down as new online safety laws come into effect

Woman using laptop looking anxiously over her shoulder

You won’t see me linking to the Torygraph often, but in this case I want to show that it’s not just left-leaning very online people who are concerned about the UK’s Online Safety Act (2023) which came into force this week.

Neil Brown, a lawyer whose specialities include internet, telecoms, and tech law, has set up a site collating information provided by Ofcom, the communications regulator. As far as I understand it, Ofcom could hardly have done more to conjure up fear, uncertainty, and doubt. There are online forums and other spaces shutting down just in case, as the fines are huge.

This is an interesting time for WAO to be starting work with Amnesty International UK on a community platform for activists. Yet more unhelpful ambiguity to traverse. Yay.

Dozens of small internet forums have blocked British users or shut down as new online safety laws come into effect, with one comparing the new regime to a British version of China’s “great firewall”.

Several smaller community-led sites have stopped operating or restricted services, blaming new illegal harms duties enforced by Ofcom from Monday.

[…]

Britain’s Online Safety Act, a sprawling set of new internet laws, includes measures to prevent children from seeing abusive content, age verification for adult websites, criminalising cyber-flashing and deepfakes, and cracking down on harmful misinformation.

Under the illegal harms duties that came into force on Monday, sites must complete risk assessments detailing how they deal with illegal material and implement safety measures to deal with the risk.

The Act allows Ofcom to fine websites £18m or 10pc of their turnover.

The regulator has pledged to prioritise larger sites, which are more at risk of spreading harmful content to a large number of users.

“We’re not setting out to penalise small, low-risk services trying to comply in good faith, and will only take action where it is proportionate and appropriate,” a spokesman said.

Source: The Telegraph

Image: Icons8 Team

In these times of chaos there seems to be a proliferation of new ways of thinking about the nature of reality springing up

Circular star trails in the night sky

There are some people who I follow who have done such interesting stuff in their lives, and whose new work continues to help me think in new ways. Buster Benson is one of these, and his latest post (unfortunately on Substack) is the beginnings of a choose-your-own-adventure style quiz about cosmology, a.k.a. “the nature of reality.”

So… I’ve been thinking a lot about cosmologies, and how in these times of chaos there seems to be a proliferation of new ways of thinking about the nature of reality springing up. If you have a few moments, can you take this short quiz and let me know which result you got, and how you feel about it?

Below you’ll find some questions designed to help you identify and share your fundamental beliefs about the nature of reality (aka your cosmology). It’s not meant to be a comprehensive survey of all possible cosmologies, but rather a tool to help you identify your own cosmology and perhaps to spark a fun conversation with others. It’s also not meant to critique or judge any of the cosmologies for being more or less true, more or less useful, or more or less good — but rather meant to be a window of observation into what beliefs exist out there amongst you all right now.

FWIW, I came out as the following, which (as I commented on Buster’s post) is entirely unsurprising to me:

Pragmatic Instrumentalism — You see scientific theories as powerful tools for prediction and control rather than literal descriptions of an ultimate reality. The value of materialism lies in its extraordinary practical utility and predictive success, not in metaphysical claims about what “really” exists. This pragmatic approach sidesteps unresolvable metaphysical debates while maintaining the full practical power of scientific methodology.

Source: Buster’s Rickshaw

Image: Good Free Photos

Love the casual vibe here

Street poster saying: Why aren't we yelling

There’s a guy I no longer interact with because I found him too angry. But when I used to follow him, he used to talk about how Big Tech’s plan was to ‘farm’ us. It’s a very Matrix-esque metaphor, but given recent developments and collaborations between Big Tech and the government in the US, perhaps not incorrect?

Businesses like predictability. There’s nothing particularly wrong with that, per se — but, at scale, that can become a bit weird. Think, for example, how odd it is to be reduced to a single button shaped like a ‘heart’ on some social networks to be able to ‘react’ to what someone else has posted. There was a time when people would actually comment more, but the like button has reduced that.

Now, of course, some social networks allow you to ‘react’ in different ways: ‘applause’, perhaps, or maybe you might want to mark that something is ‘insightful’. We might consider doing so as being “better than nothing,” but is it? How does it rearrange our interactions with one another, allowing a particular technology platform (with its own set of affordances and norms, etc.) to intermediate our interactions?

Fast-forward to this month and, of course, Meta is experimenting with a feature that allows people to use AI to reply to posts. This is already a thing on LinkedIn. It’s going about as well as you’d expect.

AI is all over social media. We have AI influencers, AI content, and AI accounts — and, now, it looks like we might get AI comments on Instagram posts, too. What are any of us doing this for anymore?

App researcher Jonah Manzano shared a post on Threads and a video on TikTok showing how some Instagram users now notice a pencil with a star icon in their comments field, allowing them to post AI-generated comments under posts and videos.

[…]

In the video on TikTok that Manzano shared, three comment options are: “Cute living room setup!” “love the casual vibe here,” and “gray cap is so cool.” Unfortunately, all three of these are clearly computer-generated slop and take an already shaky human interaction down a notch.

It’s hard to know why you’d want to remove the human element from every aspect of social media, but Instagram seems to be going to try it anyway.

Source: Mashable

Image: Mimi Di Cianni

Their knowledge of life owed nothing to their sporadic presence in the inner sanctum of university colleges and departments

Scene showing cowboys. Caption reads: Their knowledge of life owed nothing to either their sporadic presence in the inner sanctum of university colleges and departments nor to various diplomas they had acquired by the most diverse and least respectable means

Warren Ellis' Orbital Operations is always worth reading, but the most recent issue in particular is a goldmine. I could post every image from it, including the Bayeux Tapestry one. But my mother reads this, and although I’m middle-aged, I still don’t want to disappoint her by including unnecessary profanity here 😅

What interests me about the above image is that this kind of re-contextualisation has been going on for so long, and massively pre-dates the internet. As someone who was a mid-teenager when first getting onto the internet, I’m too young to remember anything like this.

I need to think more about this, but, inspired by Episode #62 of the In Bed With The Right podcast, there’s a way in which you can see AI-generated imagery shared on social media by ‘conservatives’ as the fascist-adjacent version of this. Except, instead of being clever commentary, it’s lazy nostalgia-baiting.

1966: University of Strasbourg Student Union funds are lifted by Situationist sympathisers to print Andre Bertrand’s short comic RETURN OF THE DURUTTI COLUMN, which used stills from Hollywood movies in a process then termed detournement: familiar materials recontextualised in opposition (or at strange angles) to their original intent. This is something so common on the internet now that most people may not know there’s a word for it. The only useful Google hit I can find for Andre Bertrand today is, funnily enough, the Wikipedia page for an attorney who specialises in copyright law.

Source and image: Orbital Operations

Explorers launching into the Fediverse

Close up macro photo of a leaf

You may or may not be aware that I use a service called Micro.blog to run Thought Shrapnel these days. It’s the work of a small team, who do a great job. But I bump up against the edges of it quite a lot, especially when it comes to the newsletter/digest.

So I’ve thought for a while about switching to Ghost. Not only is it Open Source, but for the last year they’ve been figuring out ActivityPub integration. That’s the protocol that underpins Fediverse apps such as Mastodon, Pixelfed, and Bonfire. Long story short, there’s a lot to think about in terms of user experience and performance, so getting it right takes a while.

They’ve just announced that those using their Ghost(Pro) hosting can try out the ActivityPub integration for the first time. I’m very tempted to switch, but the cost ($40/month for the number of subscribers we have around these parts) is putting me off. Perhaps if I encouraged more people to become supporters…?

Today we’re opening a public beta for our social web integration in Ghost. For the first time, any site on Ghost(Pro) can now try out ActivityPub.

[T]hanks for your patience! It hasn’t been easy to get this far, but we’re excited to hear what you think as you become one of our very first explorers to launch into the Fediverse.

Source: Ghost Newsletter

Image: Markus Spiske

Don't just put up with how websites are presented to you by default!

Gif showing functionality of 'Boosts' feature in Arc: the font of a website is changed quickly and easily to something more readable

I’m sharing this not for the functionality of the site, although I’m sure it provides a useful service. Instead, I’m sharing it to demonstrate a point: on the web, you can consume content however you choose. Don’t just put up with how it’s presented to you by default! I could go on about how this is a key part of developing digital literacies, but I won’t 😉

In the above gif, I’m using the built-in ‘Boosts’ feature of the Arc web browser to change the extremely poor choice of a tiny 8-bit font for something… more readable. This built-in functionality is something you can achieve in other web browsers using tools such as Tampermonkey and Stylish.
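If your browser has no Boosts-style feature, a few lines of user-style code achieve much the same thing. A minimal sketch of the Tampermonkey approach (the `buildFontOverride` helper is my own illustration, not part of any tool’s API):

```javascript
// Returns the CSS a userscript would inject to force every element
// onto a more readable font stack.
function buildFontOverride(fontStack) {
  // !important wins over the site's own font-family declarations
  return `* { font-family: ${fontStack} !important; }`;
}

// In a real userscript you would then inject it, e.g.:
//   const style = document.createElement("style");
//   style.textContent = buildFontOverride("Georgia, serif");
//   document.head.appendChild(style);
```

The same CSS pasted into a Stylish/Stylus rule for a given site works too; the point is that the presentation layer is yours to override.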

Source: Job.Hunt.Works

Affordable building materials out of agricultural waste bonded with oyster-mushroom mycelium

Golden gourmet oyster Mushrooms grown in an urban environment

Kenya, much like the UK, has a housing crisis. Mtamu Kililo has an unusual plan to address it: mushrooms. Having just finished The Overstory, I am extremely receptive to the kind of nature-first solution that Kililo is proposing goes more mainstream.

One thing I’ve learned during my career to date is that most people are aspirational, meaning that showing something works for rich or forward-thinking people makes it more palatable to others. For example, if the mushroom bricks discussed here feature on Grand Designs then they’re likely to get some traction.

I wish the world were different and that we could learn from societies that have lived in harmony with nature for millennia. But here we are. I hope that MycoTile is successful and creates a whole new sector of sustainable building materials using waste products. Fingers crossed!

I’m the co-founder and chief executive of MycoTile, which works to produce affordable building materials out of agricultural waste bonded with oyster-mushroom mycelium, a network of tiny filaments that forms a root-like structure for the mushroom.

[…]

MycoTile’s insulation panels have been installed in a few projects, including in student accommodation, and we have seen that the material works. It greatly reduced the sound travelling from one room to the next, and helped to regulate the temperature inside. This insulation is affordable, costing about two-thirds of the price of conventional insulation. And unlike those materials, it can be composted at the end of the building’s life.

[…] The insulation tiles are a success; now we’re working on developing a sturdy block like a brick. When we can produce a brick to build external walls and partitions, it will be a huge step towards affordable housing.

Source: Nature (archive version)

Image: Rachel Horton-Kitchlew

Work is part of our lives, a big part to be sure, but what if it wasn’t our whole life?

Equipment at the US Coast and Geodetic Survey geophysical observatory

I’m composing this on the train on my way back home from a CoTech gathering in London. Post-pandemic, I spend 99% of my working life at home, especially now that there are so few in-person events. So I was really glad to spend time among like-minded people and talk about ways we can work together a bit more.

The people I spent time with today are part of a work community. Some of them I count as friends, but none live very close to me. I am, therefore, quite detached from my geographic community, with my only real connection to the place where I live coming through shopping, my kids' sporting activities, and my (temporarily paused) gym membership.

This post by Mike Monteiro responds to a reader question about whether they can be happy even if they hate their job. Monteiro, who identifies as Gen X, harks back to his youth, talking about how school and work were compartmentalised so that people could be themselves outside of those strictures.

The problem now is that, for reasons Monteiro goes into, work invades our home and community life, hollowing it out until it’s devoid of meaning. As a result, we have perhaps unrealistic expectations of what work can provide for us. Except, of course, if you own your own business and work with your friends. I just wish they were nearer by and I got to hang out with them more.

Perhaps I need more offline hobbies.

When we think of our community, we’re likely to picture the people at work. Because it’s where we’re spending the majority of our time. This is by design, but it isn’t our design. It’s the company’s design. In that earlier era, when we still drank from garden hoses, losing a job sucked, but it mostly didn’t take your community with it. In this new era, losing a job means getting gutted. Not only do you lose your paycheck, but you lose access to all the people and places where you used to have your non-work-but-actually-at-work fun. And while your old co-workers will promise that you can still hang out outside work (they mean it, by the way), they’ll soon realize that they don’t really do much “outside work.”

The pandemic put a little bit of a dent in this plan, of course, because you were now working from home, but they adjusted quickly to this by keeping you on wall-to-wall Zoom calls for 12 hours a day. Which wasn’t completely sustainable (even though they said it would be) because when your zoom calls are happening on a laptop facing the window, you eventually start peeking out at what’s beyond that window, and you get curious…

Work is part of our lives, a big part to be sure, but what if it wasn’t our whole life?

They want you to return to work, to their simulation of happiness and community, because they’re afraid that if you don’t you might remember that there was a time when you were free. And you were happy. And you drank from garden hoses.

Source: Mike Monteiro

Image: NOAA

An incomplete collection of charts

Line graph showing pandemic-related sharp dip

Some pretty stark charts showing the impact of the Covid-19 pandemic on the world, in this case using data about the USA. What I like about the way they’ve done this is that, for many of them, ‘trend lines’ are included which allow you to see whether things have gone back to ‘normal’ or stayed… weird.

Over here, unlike other developed nations, the UK has experienced a sharp rise in the post-pandemic number of health-related benefit claims. The number of 16 to 64 year-olds on disability benefits in England and Wales now stands at 2.9m, an increase of almost a million. Around half of those are mental-health related claims. It is ridiculous, therefore, that the Labour government is planning to cut disability benefits while “help[ing] those who can work into work.” Forcing, more like.

My wife and I “celebrated” our 40th birthdays during lockdown, so the pandemic has pretty much cleaved our lives into two: there was what came before and now what has come after. At the moment, I massively prefer what came before. How about you?

Decades from now, the pandemic will be visible in the historical data of nearly anything measurable today: an unmistakable spike, dip or jolt that officially began for Americans five years ago this week.

Here’s an incomplete collection of charts that capture that break — across the economy, health care, education, work, family life and more.

Source & image: The New York Times

The magic of browsing the web isn't quite gone, but it's waiting to be reinvented

Web source attribution shown on these tools with inline citations, source bars, and hover cards.

I’ve been following Paul Stamatiou, aka ‘Stammy’, since he was at Georgia Tech. He’s worked at Twitter, co-founded a couple of startups, and now seems to be pivoting into AI search.

What I like about Stammy’s deep dive blog posts is that he blends an understanding of technology through the lens of design with a level of pragmatism you don’t usually see. In this post, he talks about the serendipity of the web we’re increasingly losing in favour of AI answers. But, at the same time, he talks about the convenience and value of those answers, wondering if there’s another way of using these tools to serve human curiosity.

To my mind, there’s a design angle here on the ‘supply’ side, but there’s also an opportunity to cultivate AI literacies in order to use these kinds of tools effectively.

Browsing with traditional search engines wasn’t exactly seamless but you were in the driver’s seat. You would occasionally end up on some obscure personal site or forum that was unexpectedly right up your alley.

Those magical detours defined our web experience, enriching us with insight and creativity while establishing human connections. Today’s instant answers with these AI tools sacrifice this beautiful chaos that once made the internet so captivating. But there’s hope—a new kind of browsing experience might just be around the corner.

[…]

Serendipity while browsing the web wasn’t just a byproduct of how the web began to form. It stretched our empathy by exposing us to diverse voices, nudged us out of echo chambers, and kept our web from becoming monotonous.

That style of surfing the web is fading away. What social media hasn’t already taken over, or search engines haven’t already diluted by sending us to ad-laden mainstream sites, is now steadily being eroded by AI-powered answer engines.

[…]

Today’s AI answer engines face fundamental challenges that go beyond the feeling of nostalgia for the old web, surfacing some bespoke notion about serendipitous discovery, or wanting more indie content bubbled up. The real issues—weak attribution, black-box decision-making, and homogenized responses—threaten to flatten rather than enrich how we use the web. Browsing the web isn’t and shouldn’t be a one size fits all experience.

The current generation of AI tools faces a prodigious challenge: how can they surprise and delight you when they know almost nothing about you?

[…]

AI personalization doesn’t have to be about consuming every detail of your life or cloning yourself. We’ve fended off giving too much personal data to individual companies, why start now? You don’t need to be digitally cloned to help you throughout your day. Even a little bit of info about your past interactions, goals, and interests can go a long way to delivering experiences that resonate more deeply with you.

[…]

Even with light personalization, AI answer engines could meaningfully tailor their responses to you. They would know you’re very technical and experienced with the topic at hand to skip the basics and dive deeper into technical concepts. They would know you’ve been writing about technology for 20 years and really enjoy the underlying ways things work and always want that deeper understanding. And so many other things that might seem like minute details at first, but combined really add up.

[…]

The magic of browsing the web isn’t quite gone, but it’s waiting to be reinvented.

Source & image: Paul Stamatiou

When in doubt, go see a doc!

This image shows a blue xray image of a person's chest - it shows the ribs, the faint outline of a heart, and other organs. The image features yellow squares surrounding the organs (left lung, trachea, right lung, heart, diaphragm) - each of them feature labels such as: normal, midline, normal size. There is text in the right bottom corner in white which states 'no abnormalities detected'.

As people who read my weeknotes will be aware, I’ve got some kind of undiagnosed heart condition going on at the moment. This has meant that I’ve gone from running three times per week (20-25km) and hitting the gym three times a week, to not being able to walk very far without my heart rate spiking.

This week, I’ve had angina and arrhythmia ruled out, but I’ve got to have further tests such as an echocardiogram and MRI. The consultant said that he was going to be “open and honest” with me that “it might take a long time” to figure out what’s wrong with me, and that I’m going to have to make some “lifestyle changes.”

NHS staff are under pressure and, for the most part, do a great job. They don’t have a long time to spend with patients, which is why I’ve started using AI tools such as Perplexity to investigate my symptoms. It’s important to know that this is alongside the tests I’m having done; unless I go private (not going to happen) I’ve got a bit of wait time before my next tests.

I know that some people reading this will be shocked that I would discuss my health details with an LLM. But, I would say, don’t knock it until you’ve tried it. I run one of the world’s most secure operating systems on my mobile device, but I’m telling some AI about my medical issues? Yep. I contain multitudes.

In the post I’ve excerpted below, Brett McKay from The Art of Manliness gives 30 ways in which AI can help make your life easier. I’ve used about half of them. But, even if you use private mode, a temporary window, a local LLM or some other way of obfuscating your identity, I’d give it a try.

There are some legit concerns to have about AI, to be sure. It’s not always accurate and not yet great at everything. But if used in the right way and with the right stance, AI can be really handy, improving your life and making it better and easier. It’s like having a personal assistant without paying personal assistant prices.

[…]

Figure out a health issue. I’ve replaced Dr. Google with Dr. ChatGPT. I’ll just type in my symptoms (and sometimes upload a picture — don’t forget that AI can analyze images!) and ask ChatGPT about the potential causes. For example, I’ve been having some pain in my quads lately. I couldn’t determine if it was a muscle strain or a tendon issue. So I told ChatGPT where my pain was, what the pain felt like, when I experienced it, and what precipitated the pain. ChatGPT helped me figure out that I’m dealing with a muscle strain and not a tendon issue. My daughter had some bumps show up on her foot the other week, and I couldn’t tell what it was. So I snapped a pic, uploaded it to ChatGPT, and asked, “What is this?” ChatGPT ruled it a bug bite. Should you rely on ChatGPT to diagnose you for big issues? No, but it can help you troubleshoot minor problems and know when to consult a healthcare provider (when in doubt, go see a doc!).

Explain medical test results. I’ve used ChatGPT to help explain medical test results I’ve gotten in terms I can understand. My father-in-law recently got an EKG and the cardiologist only spent a few minutes giving him a cursory explanation of the results. My FIL then went home and ran the results through AI, which gave him a lot more details.

Source: The Art of Manliness

Image: Elise Racine / Better Images of AI

Heaven is high, and the emperor is far away

Building blocks are overlayed with digital squares that highlight people living their day-to-day lives through windows. Some of the squares are accompanied by cursors.

Ben Buchanan, until recently the Biden administration’s AI special adviser, joins Ezra Klein to discuss lots of things about AI in the past, present, and future. It’s particularly interesting because, as Klein points out, Buchanan “is not a guy working for an A.I. lab. So he’s not being paid by the big A.I. labs to tell you this technology is coming.”

There’s a lot of US-specific conversation, but I’m most interested in the impact on labour markets, which I think we’re already seeing even without Artificial General Intelligence (AGI), and the rise of the surveillance state.

One of the things which is most markedly different between my childhood and that of my two teenagers is the amount of surveillance and tracking of everyday life which is seen as ‘normal’. Even small, relatively prosaic things, such as messaging apps showing by default that a message has been ‘read’. Or location tracking, whether it’s via Snap Maps, ANPR cameras on roads, or CCTV in city centres.

As mentioned in a previous post, you need to be very careful about norms, policies, and laws you encode into AI enforcement / policing / standardisation.

I would decompose this question about A.I. and autocracy or the surveillance state into two parts.

The first is the China piece of this. How does this play out in a state that is truly, in its bones, an autocracy and doesn’t even make any pretense toward democracy?

I think we could agree pretty quickly here that this makes very tangible something that is probably core to the aspiration of their society — of a level of control that only an A.I. system could help bring about. I just find that terrifying.

As an aside, there’s a saying in both Russian and Chinese: “Heaven is high, and the emperor is far away.”

Historically, even in those autocracies, there was some kind of space where the state couldn’t intrude because of the scale and the breadth of the nation. And in those autocracies, A.I. could make the force of government power worse.

Then there’s the more interesting question in the United States: What is the relationship between A.I. and democracy?

I share some of the discomfort here. There have been thinkers, historically, who have said that part of the ways we revise our laws is when people break the laws. There’s a space for that, and I think there is a humanness to our justice system that I wouldn’t want to lose.

We tasked the Department of Justice to run a process and think about this and come up with principles for the use of A.I. in criminal justice. In some cases, there are advantages to it — like cases are treated alike with the machine.

But also there’s tremendous risk of bias and discrimination and so forth because the systems are flawed and, in some cases, because the systems are ubiquitous. And I do think there is a risk of a fundamental encroachment on rights from the widespread unchecked use of A.I. in the law enforcement system that we should be very alert to and that I, as a citizen, have grave concerns about.

Source: The Ezra Klein Show (archived version)

Image: Emily Rand & LOTI

It errored out half an hour in, which is when I decided to throw in the towel

I’m kind of sick of posting about AI so much, but I did want to point out something with another new thing. This video is for Manus, “a general AI agent that bridges minds and actions: it doesn’t just think, it delivers results.” Apparently, you can just leave it to do things for you. Except, erm, you can’t.

Here’s the promo video:

Notice that the three examples are essentially about ranking, whether that’s people, property, or stocks. And why are all of the fake job applicants it shows male? As TechCrunch found, asking it to do something else which should probably be pretty straightforward causes some showstopping problems.

“Glimpses of AGI,” indeed 🙄

I asked the platform to handle what seemed to me like a pretty straightforward request: order a fried chicken sandwich from a top-rated fast food joint in my delivery range. After about 10 minutes, Manus crashed. On the second attempt, it found a menu item that met my criteria, but Manus couldn’t complete the ordering process — or provide a checkout link, even.

Manus similarly whiffed when I asked it to book a flight from NYC to Japan. Given instructions that I thought didn’t leave much room for ambiguity (e.g. “look for a business-class flight, prioritizing price and flexible dates”), the best Manus could do was serve up links to fares across several airline websites and airfare search engines like Kayak, some of which were broken.

Screenshot of Manus.ai struggling to order fried chicken sandwiches

Hoping the next few tasks might be the charm, I told Manus to reserve a table for one at a restaurant within walking distance. It failed after a few minutes. Then I asked the platform to build a Naruto-inspired fighting game. It errored out half an hour in, which is when I decided to throw in the towel.

Source and image: TechCrunch

The more we embed today’s norms into these systems, the harder it will be to course-correct later

A neural network comes out of the top of an ivory tower, above a crowd of people's heads. Some of them are reaching up to try and take some control and pull the net down to them. Watercolour illustration.

WAO wrote a report on Harnessing AI for environmental justice which serves as the background to this post by Christian Graham from Friends of the Earth. He points out that LLMs can be problematic in terms of human progress in at least a couple of important ways.

First, referencing existing norms and frameworks could make it more difficult for new, exciting, and innovative ideas to break through. Second, AI enforcement / policing / standardisation could make it difficult for us to correct course away from a problematic trajectory.

It’s well worth a read. Even though I’ve found LLMs to be extremely useful as a ‘thought partner’ for coming up with new angles on existing problems, I’m not sure the majority of people would use them in such a nuanced way. After all, the US Secretary of State is already threatening to use AI to revoke visas of foreign students who appear “pro-Hamas”.

(On a more prosaic level, can we expect much nuance when, last month, ‘Google’ was the sixth most-searched term… on Google? People, including politicians and policymakers, value convenience over everything else)

No one built this to be unjust. It’s just an AI doing its job: optimising for carbon cuts, personal accountability and cold, hard data. Yet baked into it are early 21st-century blind spots: climate as your burden, not the system’s. And AI as some impartial oracle.

This is technological lock-in. Not a grand conspiracy, but a thousand quiet choices, hardening into a future we can’t easily unwrite.

[…]

[S]tability isn’t always a good thing. The same features that make AI useful could also make it inflexible, resistant to new ideas and blind to the possibility of better alternatives:

  • Old biases risk becoming permanent – AI trained on today’s moral and economic frameworks might prioritise corporate-led sustainability initiatives over grassroots action or overvalue GDP growth as the main measure of success.
  • Future breakthroughs could struggle to take hold – Just as a Victorian-trained AI might have discouraged Darwin from publishing, future scientists, activists, and policymakers could find themselves fighting against AI-driven inertia.
  • AI governance might become self-referential – If AI models continually cite their own outputs as authoritative sources, they could create self-reinforcing knowledge loops, making early 21st-century assumptions feel like eternal truths.
  • Technology stops being a tool for change – If AI systems shape environmental, legal, and economic policies based on past precedent, it becomes harder for movements that challenge the status quo to gain traction. Instead of being a force for progress, AI becomes a force for keeping things exactly as they are.

We can already see early signs of this happening. AI is being used in policing in ways that prioritise past data over future possibilities. The more we embed today’s norms into these systems, the harder it will be to course-correct later.

Source: Friends of the Earth

Image: Jamillah Knowles & We and AI

Well, what have we here?

Small icons representing weather, mental energy, exercise, anxiety, and yesterday rating

Bryan Mathers is a friend, WAO collaborator, and creator of the Thought Shrapnel logo. He has an occasional newsletter which is always a ray of sunshine in my inbox. His latest, in particular, was delightful.

If I check in with myself at all in the mornings, it’ll be to figure out what I call my ‘emotional temperature’. This approach is much more granular, nuanced and considered. I like it!

Even though there’s a chance that over time I’ll extend these reflecto-glyphs so that it takes me two hours to get started in the morning, I quite like the little rhythm that I’ve found. Saying hello to the morning, and adopting the curious position of “well what have we here?” to the day ahead. Otherwise, I’m a sucker for the first distraction or burning issue that the day presents, and a hostage to the irrational anxieties that lie at the back of my head…

Source: The Visual Thinker

We’ve been trained to believe that the way things are is the way they have to be

the sun is shining through a window in the dark

I’m increasingly of the opinion that being on any centralised platform is a waste of time, at least in the long run. I’m not even sure what I’m doing on LinkedIn these days, as it’s certainly not useful for finding an actual job.

The Fediverse is the future; at least the future I want to inhabit.

The fediverse is a jailbreak. It’s not a product, not a single platform, it’s not something you can buy stock in or use to enrich yourself at the cost of our shared humanity. It’s a network of independent, interconnected social platforms, all running on open protocols like ActivityPub. It’s an ecosystem where you - not some incellionaire obsessed with eugenics - own your digital identity. Where your social graph belongs to you, not an algorithm’s shifting fucking whims. Where moving from one service to another doesn’t mean losing everything you’ve built and everything you’ve ever said.

We’ve been trained to believe that the way things are is the way they have to be. That Meta, Google, and whatever the hell Twitter is calling itself today are the price of admission to digital society. That you can’t have discovery without algorithmic engineering. That the internet was supposed to become a shopping mall where every interaction is measured in ad revenue. But none of this was inevitable. It was built this way—on purpose. And the fediverse offers something else: freedom.

[…]

The fediverse won’t succeed just because it’s better. It will succeed if and only if people choose it. If they reject the idea that being trapped in someone else’s ecosystem is just the cost of existing online. If they stop believing that “free” means surrendering ownership of your own connections, your own history, your own data. If they see that the internet wasn’t built to be a factory for engagement metrics and AI-generated content farms. It was built to connect us, not silo us to pad a wealth-extremist’s bank account.

[…]

The fediverse isn’t a distant dream—it’s here, right now, waiting for you to step outside the walls and see what’s possible.

Source: Joan Westenberg

Image: Ayrus Hill

The world is changing before our eyes, and it’s essential that we understand in which direction

A black and white photo of a spiral of feathers

Interesting stuff from Mihnea Măruță, who explains the origin of the philosophy of accelerationism, which goes back to at least the Italian futurists. They provided the underpinnings for Mussolini’s fascism, and the updated version of this idea (“neo-reactionism”) underpins what’s happening in the US at the moment.

This movement, which considers democracy to have become an obstacle to capitalism, is called the neo-reactionary movement, abbreviated NRx.

It is also referred to as the “Dark Enlightenment.” That term belongs to the English philosopher Nick Land, and it’s also the title of one of his books.

And so we reach the philosophical heart of the matter, because, in fact, the most fitting explanation for everything that astonishes us in America today is a philosophical one, from which political, economic, and social consequences follow.

[…]

Accelerating society toward the future, at full throttle—this would be the path envisioned by the neo-reactionary movement.

[…]

[T]he accelerationist vision is to speed things up, to unleash a hyper-capitalism, a total techno-capitalism, an anarcho-capitalism—call it what you like—a system of private governance of a monarchical type, in which the president is the general manager, the CEO of a community-company, and citizens become shareholders of that state, transformed and run according to capitalist principles of efficiency and profit.

In the accelerationist view, nation-states are obsolete and need to be replaced by a global network of city-states and autonomous territories, if possible built from scratch.

[…]

For more details, you can look at a few existing projects: Culdesac in Arizona, Prospera in Honduras, Cabin in Texas, Neighborhood SF, NOMAD, or Praxis.

[…]

The world is changing before our eyes, and it’s essential that we understand in which direction. We’re not just dealing with whims or improvisations.

Source: Mihnea Măruță

Image: Logan Voss

So dull, so dehumanizing

Crowd with person holding sign saying 'Do we look like bots?'

The thing that capitalists control is, unsurprisingly, capital. This controls how western societies work: if workers, minorities, or any other oppressed group gain too much power, capitalists use economic levers and controls to put people back in their place.

In this post, Audrey Watters reflects on the pandemic realisation about the need for student-centric education reform, and what has happened since. Which is the opposite of that.

I recently saw someone say (citation needed) that we might view this as the ongoing fallout from the pandemic, when, for a brief period, some workers were able to win some concessions from their employers: the ability to work from home, most notably. There has been an uptick in unionization and in public support for unions in the US too, reversing the past few decades' trends for both. So we shouldn’t be surprised that the response from management has been to fire people, to threaten to fire people, and to threaten to replace people with robots (or in today’s parlance, with chatbots or AI agents).

[…]

Arguably, much of the push for even more technology, even more automation, even more control, even more surveillance in the classroom is a consequence of the pandemic too. Recall the astonished recognition – it was ever-so-brief – that teaching was the most important and most difficult job and that teachers needed to not just be praised but compensated much much much better. And now, that’s been erased — purposefully; and conservatives and technologists and politicians alike feel it necessary to put teachers (read: women) back “in their place.”

[…]

During the pandemic, schools had the opportunity to radically rethink what education might look like – as a workplace for teachers, to be sure, but also as a place of growth and exploration and learning for students. Instead many opted to double-down on the worst aspects of school, to embrace some of the worst sorts of surveillance technologies – test-proctoring software, most egregiously.

There was — again, briefly — widespread revulsion about education technology during the pandemic. So dull, so dehumanizing. And now? Now we’re handing everything over to AI – a complete and total surrender to “the digital” at the expense of “the human,” letting the demands of the technology industry dictate pedagogy and research and assessment rather than respond to the needs of teachers or students or parents or communities.

Source: Second Breakfast

Image: Waldemar

Why are we sucking history through a straw?

Abernathy kids on a motorbike

One of the things I’m a bit concerned about when it comes to generative AI is the replacement of historically accurate images and text with AI slop. There’s a difference between what we’d like to be true, or what is plausible, and what actually happened.

Mike Caulfield shares a good example of this with the Abernathy brothers, who made several cross-country trips across the USA unaccompanied by adults. The AI version of this is nothing like the badass version above.

In 1910, at ages 10 and 6, they rode horseback from Oklahoma to New York City to meet ex-President Theodore Roosevelt, and became nationwide celebrities. They followed this up with a transcontinental horseride in 1911, setting a speed record. In 1913, pooling some money from earlier stunts, they bought an Indian 1000cc motorcycle, and drove it from Oklahoma to New York City — then retired from public life at the wise old ages of 13 and 10.

[…]

This is honestly the coolest picture I’ve seen in a month. Every bit of it is rich with meaning and resonance.

What are we doing here? Why are we sucking history through a straw?

I am at a loss for words.

Source: The End(s) of Argument

The only ruling principle is the total absence of purpose or seriousness

5 Steps to a Flattened Life

Frustrated by a lack of work coming in, and seeing people with 1/100th of my knowledge, experience, and skill being lauded on LinkedIn and elsewhere, I complained to my wife. She said a bunch of things in return, but one of them was something along the lines of, “the thing you don’t realise is that people just want to be entertained.”

This ‘State of the Culture’ speech is very US-centric, but nevertheless captures something about the specific moment we’re facing. Not just a Thomas Friedman-style ‘flat’ world, but a flattened world. As Cory Doctorow says, it’s five giant websites filled with screenshots of the other four. When’s the resistance to all this going to come? Or are we too busy amusing ourselves to death?

Twenty years ago, the culture was flat. Today it’s flattened.

“Corporations didn’t intend to make the culture stagnant and boring. All they really want is to impose standardization and predictability—because it’s more profitable.”

I still participate in many web platforms—I need to do it for my vocation. (But do I really? I’ve started to wonder.) But now they feel constraining.

Even worse, they now all feel the same.

Instead of connecting with people all over the world, I now get “streaming content” 24/7.

Facebook no longer wants me to stay in touch with friends overseas, or former classmates, or distant relatives. Instead it serves up memes and stupid short videos.

And they are the exact same memes and videos playing non-stop on TikTok—and Instagram, Twitter, Threads, Bluesky, YouTube shorts, etc.

Every big web platform feels the exact same.

That whole rich tapestry of my friends and family and colleagues has been replaced by the most shallow and flattened digital fluff. And this feeling of flattening is intensified by the lack of context or community.

The only ruling principle is the total absence of purpose or seriousness.

Source: The Honest Broker

Image: Ted Gioia / The Honest Broker

Reality, if you don’t sufficiently attend to it, has a tendency to kick your ass

White sheep in front of blackboard reading '2+2=5'

Dorian Taylor is a certified Smart Person making things that I don’t even understand. What I want to focus on here, however, is his overview of Dr Kate Starbird’s recent talk.

Starbird gives an overview of the self-reinforcing right-wing disinformation ecosystem where participants are rewarded — implicitly or explicitly — for creating an alternate reality. It’s no surprise, therefore, that those who are taking charge around the world are those who can use and amplify this disinformation.

It’s worth pointing out, as [this Bluesky post](https://bsky.app/profile/suchmayer.bsky.social/post/3ljrwhvvitk2j) does, that it’s not as if the intention to create alternate realities hasn’t always been there. It’s just that with the consumer technologies available these days, there’s more scope for “free-for-all improv theatre.” It’s all entertainment; it’s a game with extremely serious consequences.

[The] right-wing (dis)information ecosystem is highly participatory, per Dr. Starbird: “improvised collaborations between witting agents and unwitting though willing crowds of sincere believers”. It’s a free-for-all improv theatre with all sorts of incentives for various actors to participate, from individual curiosity-seekers, to media personalities, to hostile state actors. It rewards participation and affords a conduit for any participant to get up on stage and perform for the audience, influence elite talking points, shape policy, and win fabulous cash prizes. The right-wing disinformation ratchet operates as follows:

  • Political elites set the frame (“immigrants are criminals”),
  • random participants make spurious claims (“they’re eating your pets”),
  • the claims get boosted on social media by various influencers,
  • they get aggregated and concentrated on—and further boosted by—fringe websites and blogs,
  • Joe Rogan (or whoever) repeats the most salient claim on his podcast,
  • the claim eventually makes it on Fox News,
  • and is then rebroadcast from the bully pulpit,
  • which energizes the mob and motivates them to continue.

The rest of the media ecosystem, by contrast, still adheres to a top-down model of broadcasting polished and vetted messages, researched and workshopped by professionals of rapidly dwindling efficacy.

[…]

On its face, fighting back against the right-wing bullshit apparatus amounts to a massive collective action problem—the very kind, with some adjustments, that said apparatus is great at mobilizing. So step one is to copy them. They have an advantage, though, which I find troubling: they don’t have to worry about reality.

[…]

How do you fight an information war when you’re on the side of reality? When reality usually doesn’t matter? The key, I am beginning to suspect, is except when it does. Reality, if you don’t sufficiently attend to it, has a tendency to kick your ass. This can be wielded as a weapon.

Source: The Making of Making Sense

Image: Elimende Inagella

The profits they make without risking anything are enormous

Illustration showing how money flows from victims to scammers

This is a difficult article to excerpt, mainly because it follows the story of one woman and, in doing so, opens up a vast network of professionally-run scam businesses. It’s well worth a read to understand what’s going on in the world, and how much of a whole economy exists to swindle people out of money.

Much of this is based on social engineering, as banks and other financial institutions have put in place technical counter-measures. But the landscape is always shifting, and it’s a lucrative business when it’s possible to ‘earn’ $20,000/month for working in a well-lit, efficiently-run office.

For crime groups looking to turn a profit, operating a call center can be more lucrative than trafficking drugs, since the margins are higher and the risks of being caught much lower, according to an investigator from Spain’s Mossos d’Esquadra, the Catalonian police, who specializes in investment fraud networks.

[…]

Divided up into different language “desks,” the call center agents use fake names that match the country they are tasked with calling — in the Georgian call center, agents calling Spain had names like “Esteban Fernandez,” while the Russian desk was led by “Kseniya Koen” and the English desk employed “Mary Roberts.”

[…]

“The profits they make without risking anything are enormous,” said the investigator, who was not authorized to speak on the record about his work.

[…]

“Pure phishing frauds, where you can steal a person’s credentials … have been reduced by technical measures,” said Sakari Tuominen, Detective Superintendent of the country’s National Cyber-enabled Crimes Unit. “But then there’s this crime of fraud through social engineering, where a person first builds a trusting relationship … and of course, no technical tools or inhibitions help [in that case].”

Spanish lawyer Mauro Jordan de la Peña said that in his own country, there is a lack of urgency among the police and judiciary to pursue the cases of online investment scam victims, because it is not an issue that triggers as much social alarm as other crimes.

“In Spanish society there’s a sense that, hey, if you are scammed because you wanted to earn a lot of money, then that’s on you,” Jordan said in an interview with OCCRP.

Source: OCCRP

Image: James O’Brien/OCCRP

Three character traits will cause particular problems: caring too much, having values and having standards.

Mural in Mumbai, India, reading 'Life does not get better by chance, it gets better by change'

This post by Stephen Kell, an academic at King’s College London’s Department of Informatics, was on the front page of Hacker News recently. It resonated with me, even though I’m not in the same position as him employment-wise. We all have a finite time on this earth, so it’s worth prioritising getting on and doing stuff that you deem important, without bureaucracy and other annoyances (like ‘business development’!) getting in the way.

It’s time to admit that I’m in a mess… It’s a little over ten years since I boldly presented one of my research goals at that 2014 conference. The reception was positive and gratifying. I still get occasional fan mail about the talk. So where’s the progress on those big ideas? There’s certainly some, which I could detail—now isn’t the time. But frankly, there’s not enough. In the past year I turned 40… in fact I’m about to turn 41 as I write this. It’s time to admit I’ve landed a long way from the place where that bright-eyed 30-year-old would have hoped his future self to end up. […]

If not there, then where am I? In short, I’m trapped in a mediocre, mismanaged version of academia that is turning me into a mediocre and (self-)mismanaged individual. The problem is far from one-way traffic: if I were a more brilliant or at least better self-managing individual, I could no doubt have done better. But for now, it’s the mess I’m in. I need to get out of it, somehow.

Although the academic life has felt like my vocation, my current experience of it is one I find suffocating. If you care about things that matter—truth, quality, learning, reason, knowledge, people, doing useful things with our short time on this planet—you are a poor fit for what most of our so-called universities have become in the UK. Three character traits will cause particular problems: caring too much, having values and having standards.

Looking around, what I seem to observe is that whereas others can hack it, it’s an atmosphere I find I am very poorly adapted to breathing. In short, far too much of my time is spent on regrettably meaningless tasks, and the incentives mostly point away from quality. I am trapped in only the bad quadrants of the Eisenhower matrix. To the extent that my mind is “in the institution”, it makes me feel pretty horrible: under-appreciated, over-measured, constantly bullshitted-to, serially misunderstood, encouraged to be a bureaucrat “system-gamer” and discouraged from both actually doing what I’m good at, and actually doing good. There is an enormous and exhausting cognitive dissonance generated by not only the stereotypical bureaucracy but also the new, non-stereotypical corporate noise, the institutionally broken attitudes to teaching and the increasingly timewasterly tendencies of [organisations claiming to be] research funders.

It’s not all bad! There are still moments when it feels like my teaching is meaningful and my research time is going on things that matter. Those moments are just too few to sustain me, given the other stuff. […]

If I’m not just to muddle on like this until I die or at least retire (it’s scarily little time until I can claim my pension!), there’s an imperative either to get out of this suffocating environment or at least to open up a vent… perhaps one large enough to crawl out of later. However, I’m not ready to Just Quit just yet. Being a citizen of the academic world is useful; I don’t have to go for a metaphorical knighthood. My new plan is to focus more on basic sufficiency. I want to use my citizenship to do good. There is still some will in the machine to do good, even though the default pathways increasingly strangle such impulses; walking out would squander this meagre but still valuable capital.

Source: Rambles around computer science

Image: ShareYaarNow

What do you *like* to do?

five people standing while talking each other

You should already subscribe to Kai Brach’s Dense Discovery newsletter but, if you haven’t yet had the pleasure, I’d like to introduce it by way of his opening to the latest issue.

I find the question “what do you do?” so difficult to answer. I find it difficult enough even explaining what WAO does, to be honest, given how much of a range of stuff we do for clients. So the suggestion to reframe the question is a welcome one, and helps shift our collective conversations away from hierarchical, company-centric ways of being.

The dinner party question we all dread and ask in equal measure: ‘What do you do?’ It’s a peculiar cultural shorthand that attempts to compress our entire existence into a job title and industry. The way we’ve elevated professional identity to the centrepiece of selfhood comes at a considerable cost, narrowing our understanding of value and connection to something that can be neatly added to LinkedIn.

Simone Stolzoff beautifully captures this over-identification with work in his recent TED talk. You might remember his book The Good Enough Job (featured in DD214), which examines this theme at length. In this condensed pitch for a less work-centric life, he reminds us that “we are all more than just workers. We’re parents and friends and citizens and artists and travellers and neighbours. Much like an investor benefits from diversifying the sources of stocks in their portfolio, we, too, benefit from diversifying the sources of meaning and identity in our lives.”

Stolzoff offers three practical steps to help us ‘diversify’ our identities: creating time sanctuaries where work is forbidden, filling those spaces with activities that reinforce alternative identities, and joining communities that couldn’t care less about our professional achievements. It’s blindingly obvious advice, though it feels almost radical in our achievement-obsessed culture.

“If we want to develop more well-rounded versions of ourselves, if we want to build robust relationships and live in robust communities and have a robust society at large, we all must invest in aspects of our lives beyond work. We shouldn’t just work less because it makes us better workers. We should work less because it makes us better people.”

“This is about teaching our kids that their self-worth is not determined by their job title. This is about reinforcing the fact that not all noble work neatly translates to a line on a resume. This is about setting the example that we all have a responsibility to contribute to the world in a way beyond contributing to one organisation’s bottom line.”

And here’s a bit of dinner party advice that might just salvage our collective sanity: rather than asking ‘What do you do?’, Stolzoff suggests adding two small words: ‘What do you like to do?’

“Maybe you like to cook. Maybe you like to write. Maybe you do some of those things for work. Or maybe you don’t. ‘What do you like to do’ is a question that allows each of us to define ourselves on our own terms.”

In a world obsessed with productivity metrics and career trajectories, perhaps this tiny adjustment to our social script might help us recognise each other not just as economic units, but as the complex, multifaceted beings we truly are.

Source: Dense Discovery #328

Image: Antenna

It’s not just making packed lunches

woman in black long sleeve shirt sitting on chair

At MozFest 2019, I revealed as part of a group discussion that I don’t use Meta’s products — including WhatsApp. A participant, who knew I have sporty kids, asked how I managed to organise their activities. “My wife uses Facebook and WhatsApp,” I said. “Oh, you outsource the labour?” was their withering reply.

Six years later, and I still don’t use WhatsApp and Hannah (my wife) still sorts all of that stuff out. Any time we’re having a disagreement, she does tend to bring this up as an additional mental burden. So I was interested in this article by Chloë Hamilton in The Guardian where she and her partner “swapped” mental loads for a week.

I recommend reading the whole thing. It probably won’t be what you expect, and it both provided me with some insights and confirmed some things I’d suspected all along. I’d direct you in particular to the bit (not quoted below) about when their kids head off to be looked after by their grandparents…

It starts with a discussion in the car, prompted by the washing up. It wasn’t done that morning. The laundry needs hanging up, too, and someone has forgotten to make the packed lunches. We need to pay the dog walker, fix the broken bath panel, work out why our toddler has started waking in the night and book our youngest in for a haircut. Then there’s a half-planned playdate to confirm, meals to plan and all those family WhatsApp group messages that need a response.

Historically, women in heterosexual relationships have carried the heft of the mental load, also known as cognitive household labour. This is the behind-the-scenes work, often intangible, that goes into running a household. It’s not just the jobs: it’s thinking about those jobs. The true extent of this work, invisible and embedded as it is, can be hard to define; an iceberg of tasks concealed beneath waves of tradition, expectation and stereotypes. It’s not just the doing, it’s the remembering, the realising, the anticipating, the assigning. It’s not just making packed lunches, it’s getting food in, making sure it’s nutritious, checking the lunchboxes are washed and ready. It’s knowing the toddler has gone off bananas and the baby can’t eat chunks of apple yet. This work is unpaid, unseen and, often, unappreciated.

[…]

We conclude, as we pull into the childminder’s, that if long-term change is the objective, talking about it with your partner isn’t just recommended, it is essential – and, actually, doesn’t at all nullify the purpose of the chat. Opening up a dialogue has allowed us to have a respectful, thoughtful and continuing conversation about how we are feeling and faring. We have made the invisible visible.

Source: The Guardian

Image: Helena Lopes

Exploring the many ways in which people interact with place

Antique postcard showing standing stones (Orkney)

You’re not going to get many recommendations from me to sign up to a Substack-powered newsletter (why?) but I’m going to give you one today. I’m delighted to say that Northern Earth, a magazine I initially subscribed to on the recommendation of author Warren Ellis, is continuing under a new editor, and they have created a new monthly newsletter called The Hare.

Founded in 1979, Northern Earth is the world’s longest-running journal combining interests in archaeology, folklore, neoantiquarianism, earth mysteries, phenomenology and psychogeography – exploring the many ways in which people interact with place.

Don’t get me wrong, the community around the magazine is a broad church, so there are some things in it at which I raise an eyebrow. But, on the whole, I like alternative explanations of history and pre-history. As Hercule Poirot, the famous fictional detective, once said, “If the little grey cells are not exercised, they grow the rust.”

Here are some links from the first issue of The Hare:

Read John Palmer’s new article at our website: A Saxon alignment and pagan cult site in Twente, the Netherlands

11,000-year-old Indigenous village uncovered near Sturgeon Lake, Canada [University of Saskatchewan]

[Podcast] Broken Veil – ‘A psychogeographic journey into the strangeness close at hand’

Image: from the inaugural issue of The Hare

Not an aesthetic of seduction, but of brutal carelessness and blatant ignorance

Screencap from BBC Verify video showing posts from Instagram users

I didn’t really want to share anything about US politics this week, but I can’t not share this thread from Roland Meyer about the Trump ‘Gaza Riviera’ video that you’ve probably seen by now. Or at least read about. It’s the first time I’ve come across Meyer, who is DIZH Bridge Professor in Digital Cultures and Arts at the University of Zurich and the Zurich University of the Arts (ZHdK).

He says that sharing the video “without trying to understand how it shows our new normal” is problematic, but ignoring it isn’t an option either. I’m sharing his analysis mainly because ‘platform realism’ is a useful term to avoid just vaguely gesturing at something as ‘AI generated’. The point is that creating this kind of aesthetic is easy and cheap in a world of consumer-grade AI tools which require no particular talent to use.

No-one thinks that this is real; the point is rather to provoke, distract, and demonstrate a brazen disregard for international law. It’s a fabrication of a different reality, something that is absolutely part of the fascist playbook. It started with the ‘Gulf of Mexico’ naming dispute, and continues from there. After all, time spent refuting bullshit is time not spent doing anything else.

There is nothing crazy about this. This horrific video epitomizes the logic of current meme-fascism: it’s a colonization of the imagination that precedes and aestheticizes real neo-imperialist violence, dressed up in the glossy looks of stock imagery, influencer content and online scamming
1/

⁠⁠The vision, if you want to call it that, presented in this video is cheap, superficial, inconsistent and hardly capable of convincing, seducing or deceiving anyone. But that’s exactly the point: Trump doesn’t need to convince anyone, he can use his raw power. The video shows how little he cares
2/

Unlike 20th century propaganda, #platformrealism is not an aesthetic of seduction, but of brutal carelessness and blatant ignorance. The power of a video like this lies in the sloppiness of its means - anyone could produce it, without expertise, without investment, without even watching it
3/

Pointing out AI-induced glitches and hallucinations like the bearded dancers therefore seems to me to be beside the point. Such traditional critique of representation is helpless when those in power are no longer concerned with the details of representation, but only with ›flooding the zone‹
4/

Source: Bluesky

Image: BBC Verify

It always seemed ripe for mapping and distilling the patterns together more interactively

A Pattern Language graphic

I discovered this via the Are.na newsletter. It’s a kind of social bookmarking and discovery service that strongly influenced some of the early iterations of MoodleNet (RIP).

Anyway, A Pattern Language has been referenced in multiple places I’ve paid attention to over the years, but the book is usually expensive. That’s why I’m pleased that there’s now this interactive version, which links the ideas it contains to one another, as a hypertext.

A Pattern Language is the second in a series of books which describe an entirely new attitude to architecture and planning. The books are intended to provide a complete working alternative to our present ideas about architecture, building, and planning—an alternative which will, we hope, gradually replace current ideas and practices.

My friends and I have long been fans of this book, and attempt to use its patterns in our own homes and spaces. The book is 1,200 pages long, with countless intertextual connections. It always seemed ripe for mapping and distilling the patterns together more interactively. All text, except this section, is excerpted from the book.

Source: A Pattern Language

Cracking cheese, Gromit

Keir Starmer with subtitles saying 'there a famous slogan in the United Kingdom'.

Sometimes, the internet reminds you how weird and wonderful humans are. On this occasion, it was the above image which people replied to with their best slogans. I’m including some examples from a dedicated Reddit thread, but I think first prize has to go to Giles Turnbull who captioned the image “Cracking cheese, Gromit.”

Absolutely perfect. Nailed it.

“I have a cunning plan” (Humannylies)

“Don’t piss on me and tell me it’s raining.” (LondonEntUK)

“Nice to see you… to see you, nice” (Naturally_Fragrant)

“…there aren’t no party like an S Club party.” (nasted)

“You’ve been Tangoed” (LeicsBob)

Source: Reddit

Everything happens in a place

Example of a map resulting from a series of conversations

Years ago, when I was a teacher, we had an influx of Polish children to our school. After a couple of weeks, one of the senior leaders gave the new pupils a map of the school and asked them to add a smiley face, a neutral face, or an unhappy face depending on how they felt about those spaces. Many of them couldn’t speak English, so it was a really important way of starting to figure out how, where, and why they were feeling included or excluded.

I was reminded of this when reading this post about “maps as conversations”. It’s a way of understanding how different groups understand and move through spaces. It’s a way of having multiple maps of the territory, rather than just the official versions.

I’m having a chat with Tom Watson, one of the authors, next week, so I’m looking forward to finding out more. Especially as one of the areas used as an example is Sheffield, where I went to university as an undergraduate.

What started as an idea that “everything happens in a place” has turned into a practical, replicable initiative. Through the power of citizen-led mapping, Sheffield’s communities have already begun to shift how we talk about places, how we deliver services, and how we make decisions together.

We think other areas now have the chance to adapt and benefit from this model. By grounding policy and practice in how people actually live, connect and identify, localities can cultivate greater participation, more cohesive relationships and a richer sense of community ownership.

Source: Data for Action

Image: (from the post)

Once upon a time, personal or honest takes were regarded as awkward and professionally desperate

Person dressed in leather with face paint, a studded neck collar, and purple hair, using a laptop

You can definitely tell how old someone is by the way they use LinkedIn. If someone announces a job move by saying they have “some personal news” they are definitely Gen X. I’m a Xennial so just super-awkward on every social platform; I’m torn between wanting to look/sound “grown up” and just wanting to share all of the things everywhere.

LinkedIn, though, is absolutely crushing it in terms of engagement and revenue. If you think about it, the main feed is very different to how it used to be, and that’s a function of younger generations entering the workforce, as well as more people working from home. It’s difficult to remember to be super-professional when you’re still in your running gear and you’ve just hung up the washing between Zoom calls.

As this article discusses, the interplay between the generations on LinkedIn is really interesting. It’s more likely that older generations are believers in working from an office in a hierarchical structure; it’s more likely that younger generations are opposed to both of those things. I still find it an annoying place to kind-of-have-to hang out. I’d prefer it didn’t exist, or at least prefer that it had a different overall vibe. But, while it is the main professional network, I’m going to share all of the things there.

Last week, Microsoft revealed that the site is seeing record engagement, with comments on the platform up 37% year over year. Moreover, millions of people have now signed up for LinkedIn Premium; the company revealed that it’s earned more than $2 billion in revenue from its AI-laden premium service in the last 12 months. Indeed, LinkedIn more broadly contributes healthily to Microsoft’s bottom line — the division delivered $16 billion in revenue in 2024, more than The New York Times, Zoom, and Docusign put together.

[…]

Younger generations tend to reflexively reject spending time on the same online social media platforms as their parents (here’s looking at you, Facebook). But, unfortunately for the youth, you do tend to turn into your parents as you age, and LinkedIn is no exception. As Gen Z has entered the workforce, they seem to have no problem with the site, with the number of American Gen Z users on LinkedIn estimated to have risen 14% in 2024, per Insider Intelligence. But those younger users post on the site in a very different way.

Once upon a time, personal or honest takes were regarded as awkward and professionally desperate on LinkedIn. But being a so-called “thinkfluencer” in 2025 is increasingly a strategic way to boost your “personal brand” (should you desire to have such a thing). After a number of conversations with small business owners over the last few months, the reality is that posting every single day on LinkedIn, even if it feels uncomfortable at times, is a bona fide way of bringing in leads.

[…]

With more and more people dipping their toes into remote working, definitions of what’s socially acceptable to share at work are also changing. It’s this interplay between generations and workforces (work-from-home vs. work-from-office), and the fact that some make serious money from the platform, that makes LinkedIn — for lack of a better word — weird.

Source: Sherwood

Image: Never Dull Studio

But I blogged about that in detail a while back, shall I send you a link later?

Green typewriter with paper that says WRITE SOMETHING

Writing is a form of extended thinking. Or, at least it is for me. Which is why I think that blogging, either here on Thought Shrapnel, on my personal blog, on the WAO blog, or occasionally over at ambiguiti.es, is so useful.

Giles Thomas points out that a blog is the equivalent of showing your contributions to open source software via a GitHub profile. It’s a good analogy: by working openly and sharing your thinking, you create a link for every significant thought or connection you’ve made between ideas. That means, in my case at least, I can search for my name and the topic, and a bunch of things come up.

Although I haven’t included it in the quotation below, the original rationale for Thomas' post is whether it’s worth blogging in the age of AI. It’s an unequivocal YES for me, but then I’m the kind of person who donated my doctoral thesis to the public domain. Nobody “owns” ideas, so by blogging you’re helping contribute to the sum total of human knowledge.

I said that you will be vanishingly unlikely to make a name for yourself with blogging on its own. But that doesn’t mean it’s pointless from a career perspective. You’re building up a portfolio of writing about topics that interest you. Imagine you’re in a job interview and are asked about X. You reply with the details you know, and add “but I blogged about that in detail a while back, shall I send you a link later?” Or if you’re aiming to close a contract with a potential consulting client in a particular area – wouldn’t it be useful to send them a list of links showing your thoughts on aspects of exactly that topic?

Your GitHub profile shows your contributions to open source and lets people know how well you can code. But your blog shows your contributions to knowledge, and shows how well you can think. That’s valuable!

Source: Giles' blog

Image: Markus Winkler

If a waiter has to explain the “concept” behind a menu there is something wrong with the menu

Black and red sign saying SANDWICHES SHAKES MALTS & DRUGS

For those unaware, for the past 15 years, Jay Rayner has been the food critic for The Guardian and its sister publication, The Observer. The latter has a ‘food monthly’ supplement which is usually referred to by the acronym OFM.

In Rayner’s last column for OFM he dispenses lots of fantastic advice. Here are my favourite parts, some of which can be used as metaphors and are therefore more widely applicable.

Individual foods are not pharmaceuticals; just eat a balanced diet. There is nothing you can eat or drink that will detoxify you; that’s what your liver and kidneys are for. No healthy person needs to wear a glucose spike monitor; it’s a fad indulged by the worried well. As is the cobblers of being interested in “wellness”, because nobody is interested in “illness”. People have morals but food doesn’t, so don’t describe dishes as “dirty”. And stop it with the whole “clean eating” thing. It’s annoying and vacuous.

[…]

Tipping should be abolished. It’s wrong that restaurant staff should be dependent on the mood of the customer for the size of their wage. They should be paid properly. It works in Japan, France and Australia. It can work in the UK. All new restaurants should employ someone over 50 to check whether the print on the menu is big enough to be read, the lighting bright enough for it to be read by and the seats comfortable enough for a lengthy meal. If a waiter has to explain the “concept” behind a menu there is something wrong with the menu.

Source: The Observer

Image: Damien Santos

I call it the feediverse. It's not a joke.

Lots of birds sitting on power lines

Dave Winer has launched something called WordLand, which uses RSS as the specification underpinning a federated social network. This is instead of ActivityPub, which underpins the Fediverse (Mastodon, Pixelfed, etc.), or ATProto, which powers Bluesky.

I immediately ran into an error about API calls, with no suggestion how to fix it. I’m also not entirely sure how textcasting is different to just, blogging? This approach seems a bit post hoc, ergo propter hoc. Just as with something like Delta Chat which piggybacks on email for chat functionality, this uses blogs for microblogging 🤔

Thanks to John Johnston for bringing this to my attention, and for pointing me towards PootleWriter which looks simple and great for quickly getting things on the web.

WordLand is designed to be the kind of editor you use in a social app like Bluesky or Mastodon, but with most of the features of textcasting.

WordLand is where we start to boot up a simple social net using only RSS as the protocol connecting users. Rather than wait for ActivityPub and AT Proto to get their acts together. I think we can do it with feeds and start off with immediate interop without the complexity of federation. I call it the feediverse. It’s not a joke, although it may incite a smile and a giggle. And that’s ok.
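To make the idea concrete: in a feed-based social net, each post is just an `<item>` in someone’s RSS feed, and your “timeline” is whatever you get by parsing the feeds you follow. A minimal sketch in Python using only the standard library (the feed XML here is invented for illustration, and isn’t WordLand’s actual output):

```python
# Sketch of treating RSS items as posts in a feed-based social network,
# in the spirit of Winer's "feediverse". Illustrative only.
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example feediverse user</title>
    <item>
      <title>First post</title>
      <description>Hello from an RSS-native social net.</description>
      <pubDate>Sat, 08 Mar 2025 10:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def posts_from_feed(xml_text):
    """Return each <item> in an RSS 2.0 feed as a simple post dict."""
    root = ET.fromstring(xml_text)
    return [
        {
            "title": item.findtext("title"),
            "body": item.findtext("description"),
            "published": item.findtext("pubDate"),
        }
        for item in root.iter("item")
    ]

for post in posts_from_feed(FEED):
    print(post["title"], "—", post["body"])
```

The appeal is obvious: RSS already has decades of deployed tooling, so “interop” is just fetching and parsing feeds, with no federation handshake required.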

Source: Scripting News

Image: Juno Jo

The idea stood up to more than casual scrutiny

A4 zine folding guide

There is enough going on in the world and in my life at the moment that Thought Shrapnel does not need to deal with. Instead, dear reader, I present to you PJ Holden’s microfiction newsletter, A4, in the same spirit as Jay Springett’s Start Select Reset zine.

A4 is a single A4 sheet of paper with seven little nano-fiction stories. The sheet is designed to be printed and folded in such a way that you end up with a lovely little standee with a pulp fiction like cover. (Or you could just read them on screen! but I promise, it’s worth the effort!)

Issue Zero was my test fire to see if the idea stood up to more than casual scrutiny and so far, a surprising number of people have downloaded it (I have the stats!)

Source: PJ Holden’s Blog

Image: from the author’s post

Sometimes life seems really short, and other times it seems impossibly long

Screenshot from Gina Trapani's site

Matt Muir links to My Life in Weeks by Gina Trapani, which she adapted from Buster Benson. He got the idea from Tim Urban. You can create your own version at weeksofyour.life.

I like the idea of representing one’s life like this, for several reasons. First, as Urban’s initial post points out:

Sometimes life seems really short, and other times it seems impossibly long. But this chart helps to emphasize that it’s most certainly finite. Those are your weeks and they’re all you’ve got.

Personally, 2025 has been terrible for me so far. But we’re only a few weeks in! The rest of it could be great, who knows?

The boxes can also be a reminder that life is forgiving. No matter what happens each week, you get a new fresh box to work with the next week. It makes me want to skip the New Year’s Resolutions—they never work anyway—and focus on making New Week’s Resolutions every Sunday night. Each blank box is an opportunity to crush the week—a good thing to remember.
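The arithmetic behind these charts is simple: divide days lived by seven and compare against the boxes on a finite grid. A rough sketch (the birthdate and the 90-year span are illustrative, not taken from any of the linked posts):

```python
# Rough arithmetic behind a "life in weeks" chart: weeks lived so far,
# and boxes remaining on a hypothetical 90-year grid. Dates are made up.
from datetime import date

WEEKS_IN_LIFE = 90 * 52  # ~4,680 boxes on a 90-year chart

def weeks_lived(born: date, today: date) -> int:
    """Whole weeks elapsed between two dates."""
    return (today - born).days // 7

lived = weeks_lived(date(1985, 6, 1), date(2025, 3, 8))
remaining = WEEKS_IN_LIFE - lived
print(f"{lived} weeks lived, roughly {remaining} boxes left")
```

Seeing the remainder as a single number is exactly the point Urban makes: the grid is large, but most certainly finite.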

Source: (various)

Image: Screenshot from Gina Trapani’s site

Nostalgia tells you that your personal history wasn’t just scary or tragic; it helped make you who you are

a pile of old photos and postcards sitting on top of each other

I’ve been listening to an interesting interview over the past couple of days where Rick Rubin, the legendary music producer, interviews Will Smith. One of the things Smith says is that people have a real “thirst” for nostalgia at the moment, wanting to go back to a time when things were a little bit better.

This article by Olga Khazan in The Atlantic looks at some of the research into this topic, noting that reflecting on past times, even tough ones, gives people a story, a sense of self, and a sense of solidarity with others. Ultimately, it seems, nostalgia is all about creating a sense of security, which absolutely makes sense.

Nostalgia for terrible things may sound absurd, but many people experience it, for reasons that speak to the way people make meaning of their lives. The central reason for this phenomenon, according to researchers who study nostalgia, is that humans look to our past selves to make sense of our present. Reflecting on the challenging times we’ve endured provides significance and edification to a life that can otherwise seem pointlessly difficult. The past was tough, we think, but we survived it, so we must be tough too.

To be sure, part of the explanation is that people tend to romanticize the past, remembering it more rosily than it actually was. Thanks to something called the “fading affect bias,” negative feelings about an event evaporate much more quickly than positive ones. As a difficult experience recedes in time, we start to miss its happier aspects and gloss over the challenges. And nostalgia is usually prompted by a feeling of dissatisfaction with the present, experts say, making the past seem better by comparison.

[…]

There are few large, robust studies on this topic, but some experimental research has shown that nostalgia provides a feeling of authenticity and a sense of connection between your past and present selves. Because of this, we often get nostalgic for consequential moments in our lives. “People are nostalgic for things that give their lives meaning or help them feel important,” says Andrew Abeyta, a psychology professor at Rutgers University.

[…]

Reminiscing about a difficult experience reminds you that at least you survived, and that your loved ones came to your aid. “The fact that those people did those things for you, or were there for you, reassures you that you have your self-worth,” Batcho said. Research by the psychologist Tim Wildschut and his colleagues found that people who wrote about a nostalgic experience went on to feel higher self-esteem than a control group, and they also felt more secure in their relationships.

Source: The Atlantic

Image: Jon Tyson

All things good should flow into the boulevard

water fountain beside park

Warren Ellis comments on how fractured and fragmented the world now is when it comes to keeping up with what other people are thinking and producing. You can’t trust the algorithms any more, and there are precious few people doing the curatorial work across multiple streams — which is why I appreciate people like Jason Kottke, Tina Roth Eisenberg, Stephen Downes, Matt Muir and, of course, Ellis himself.

I was talking to a publisher friend last night about Patreon, on which he spends a lot of time looking at comics creators. I do not – I didn’t find out until last night that I still have an account on there, and I’m still not sure how that’s possible. Anyway. His thing was: he sees lots of work-in-progress and one page updates and stuff there, but how has it not become a primary delivery system for digital comics? Like, for your membership fee, or an extra dollar or whatever, here’s the first issue of my comic for you to read online or download, and the next one will be on this day next month, and so on. Maybe there’s a limited physical print edition that I’ll offer for sale a month later. And there’s no deal for collection, so maybe you’ll never see this again.

(It occurred to me this morning that any writer could do that with ebooks, too, and then whack them out to Amazon two months later.)

My thing was, does anyone really want to fracture common culture and a shared marketplace any more than it already is? And an hour later, I thought, common culture is a delusion of my age. Common platforms, perhaps, but platforms are contingent and temporary. We are all “creators” now.

Is there even a digital comics store and reading app that a majority of people use now?

(There is a supposed quote by Pericles I heard years ago but never sourced: “All things good should flow into the boulevard.”)

This note from my friend, which I summarise here to preserve it for myself, has gotten me thinking about that entire space. It’s less walled-off from the world than Kickstarter-style crowdfunding, perhaps? (I think Kickstarter and Backerkit et al are great: my concern over work crowdfunded in that style doesn’t transmit anything into the general culture. Again, probably a fixed idea from my age and background.) I’m always wondering how much great work I might be missing simply because I can’t find it browsing around real or virtual shelves.

Source: Warren Ellis Ltd

Image: Jonas Stolle

The revolution, it turns out, is boringly iterative

Onion Collective’s Petal Model of Regenerative Transition

Jessica Prendergrast is part of Onion Collective, which undertook an experimental research project last year funded by the Joseph Rowntree Foundation. In this first of a series of four essays, Prendergrast explores new models for transforming systems beyond capitalism, ultimately arriving at a three-part Ebbing/Evolving/Emerging framework they name Onion Collective’s Petal Model of Regenerative Transition.

I like the emphasis on language and metaphor in shaping our understanding of change, as well as the potential for innovation at the periphery of society. It’s a hopeful piece, which is what we need in such times. I’ve included the image of the model which gives some examples, to aid with understanding.

At Onion Collective, positioned as we are in the ‘niche’, the further we delved into system transformation or replacement, the more conscious we became of how all these models, without fail, position the radical as outliers — trying to break in — rather than centre-ing them as dominant forces of change, reinforcing their radicalism as oddity. Whether unintentionally or a symptom of internalised capitalism, this seemed to reflect how anything which challenges the status quo is targeted as ‘radical’ or ‘extreme’. Rebecca Solnit explores this phenomenon in her extraordinary book, Hope in the Dark. She explains how those who are marginalised, especially when they try to push through to the centre, are often portrayed as dangerous and unsavoury, defamed and even criminalised. This was as true for civil rights activists, suffragettes, and abolitionists, as it is now with climate activists and post growth academics. They are portrayed as rabble on the fringe, somehow both naive (or swampy or woke depending on your era) and dangerous — a kind of system-level dismissal or sniggering at those suggesting an alternative to the mainstream, and one which feels particularly galling when that mainstream is creaking (burning, flooding, dying) under the weight of the damage it has created.

[…]

To reflect where the radical power for change really lies, as a starting point, we wanted to convey emerging and alternative futures practitioners less like oddities or outliers and more like a new beginning at the heart of the model. We wanted a model that better represented the viewpoint and power of all those under the waterline (whether in the global south or left-behind places) and that could begin to change what was ‘thinkable’. In the metaphorical battle for hearts and minds, we wanted to find a way to position the dominant but damaging paradigm as on the edge — a far more logical placement in the sense of the ‘extremeness’ of a position that is destroying itself and the planet. And, we put the alternative future in the centre of the action rather than the outskirts of possibility.

[…]

Intentionally, the three rounds of petals are layered up on top of one another reflecting that the new lives alongside the old even as it envelops it and, as we learned from Gibson-Graham, that all sorts of non-dominant regime activity is always happening alongside the mainstream. The layering and overlap also recognises that most of those building alternative futures are operating in what we have described previously as a liminal space. They tend to be working in multiple arenas all at once, and balancing the contradictions and complexities of such all the time. They may be doing a fair bit of ebbing, evolving and emerging work at once, by virtue of existing in the contradictory reality of late-stage capitalism.

The revolution, it turns out, is boringly iterative.

[…]

An example version here shows a host of areas added. In this case, these are petal sets that felt especially relevant to our practice at Onion Collective. For example, our work is at the nexus of culture, community and climate work; it takes in explorations of land use and ownership; knowledge production and sharing and alternative demonstrations of economics in place and at a systems level.

[…]

Viewed from the centre of this flower, where it’s all fresh and new and emergent, far away from the browning decaying edges of the old regime’s petals, it becomes easier to imagine the end of capitalism. From here, to overplay and mix up the metaphors, the ice above the waterline could melt away. From here, looking at all the activity and dreams and hopes that are coalescing under the surface, it’s not so difficult to conceive that maybe, we’ve just been looking in the wrong place, blinded by the light of capitalism. After all, the history of the world tells us that dominant paradigms dominate only until they don’t anymore. Eventually they give way, either gently or in turbulence, to something else. Change is inevitable, new petals will unfurl and a different kind of flower will come into bloom.

Source: Onion Collective

Image: taken from the article

It's better than strapping clay crocodiles to people’s heads and praying for the best

Conceptual illustration by Aleksandra Czudżak showing a person suffering from migraine.

As I have written about several times over the years, I am a migraineur. Migraines have been with me all my adult life, and I can’t really remember what life was like without them. Preventative medication makes me drowsy, so apart from triptans to relieve symptoms, my only recourse is rest.

I’ve sent this article in Nature to my immediate family, who seem to confuse certain migraine phases with neurodiversity. The diagram below, in particular, is extremely valuable to anyone who is a migraineur, or who knows one. It’s easy to focus on the visual disturbances and the cranial pain, but there’s much more to it than that.

Illustration showing the cyclical nature of migraines

And, as I’ve discussed before, post-migraine is an extremely fertile time for me, with it being the perfect time for creative pursuits, including coming up with new or innovative ideas. That being said, I’m not entirely sure that the benefits outweigh the drawbacks, which is why I would absolutely explore new drugs which help prevent them in novel ways.

For ages, the perception of migraine has been one of suffering with little to no relief. In ancient Egypt, physicians strapped clay crocodiles to people’s heads and prayed for the best. And as late as the seventeenth century, surgeons bored holes into people’s skulls — some have suggested — to let the migraine out. The twentieth century brought much more effective treatments, but they did not work for a significant fraction of the roughly one billion people who experience migraine worldwide.

Now there is a new sense of progress running through the field, brought about by developments on several fronts. Medical advances in the past few decades — including the approval of gepants and related treatments — have redefined migraine as “a treatable and manageable condition”, says Diana Krause, a neuropharmacologist at the University of California, Irvine.

[…]

Researchers are trying to discover what triggers a migraine-prone brain to flip into a hyperactive state, causing a full-blown attack, or for that matter, what makes a brain prone to the condition. A new and broader approach to research and treatment is needed, says Arne May, a neurologist at the University Medical Center Hamburg–Eppendorf in Germany. To stop migraine completely and not just headache pain, he says, “we need to create new frameworks to understand how the brain activates the whole system of migraine”.

[…]

Researchers found that changes in the brain’s activity start appearing at what’s known as the premonitory phase, which begins hours to days before an attack (see ‘Migraine is cyclical’). The premonitory phase is characterized by a swathe of symptoms, including nausea, food cravings, faintness, fatigue and yawning. That’s often followed by a days-long migraine attack phase, which comes with overwhelming headache pain and other physical and psychological symptoms. After the attack subsides, the postdrome phase has its own associated set of symptoms that include depression, euphoria and fatigue. An interictal phase marks the time between attacks and can involve symptoms as well.

[…]

The limbic system is a group of interconnected brain structures that process sensory information and regulate emotions. Studies that scanned the brains of people with migraine every few days for several weeks showed that hypothalamic connectivity to various parts of the brain increases just before a migraine attack begins, then collapses during the headache phase.

May and others think that the hypothalamus loses control over the limbic system about two days before the attack begins, and it results in changes to conscious experiences that might explain symptoms such as light- and sound-sensitivity, or cognitive impairments. At the same time, the breakdown of hypothalamic control puts the body’s homeostatic balance out of kilter, which explains why symptoms such as fatigue, nausea, yawning and food cravings are common when a migraine is building up, says Krause.

Migraine researchers now talk of a hypothetical ‘migraine threshold’ in which environmental or physiological triggers tip brain activity into a dysregulated state.

Source: Nature

Images: taken from the article

The consumption of generative AI as entertainment seems like another order of psychic submission

'Being' is a digital griot who functions as a performance artist, poet, educator and healer

I quoted with approval from the first part of R.H. Lossin’s essay in e-flux on “the relationship between art, artificial intelligence, and emerging forms of hegemony.” In the second part, she puts forward an even more explicitly marxist critique, suggesting that being human involves both embodiment and emotion — something that AI can only ever imitate.

What I particularly appreciated in this second part was the focus on domination. I could have quoted more below, including one particularly juicy bit about Amazon’s Mechanical Turk, NFTs, and exploitation. You’ll just have to go and read the whole thing.

The liberal impulse to redress historic wrongs by progressively expanding the public sphere is nothing to scoff at. There couldn’t be a better time for marxists to climb down and admit the social value of including someone other than white heterosexuals in public discourse and cultural production. That said, counterhegemonic generative AI is a fantasy even if you define the diversification of therapy as counterhegemonic. In addition to causing disproportionate environmental harm, these elaborate experiments with computer subjectivity are always an exercise in labor exploitation and colonial domination. Materially, they are dependent on the maintenance and expansion of the extractive arrangements established by colonialism and the ongoing concentration of wealth and intellectual resources in the hands of very few men; ideologically they require increasing alienation and the elimination of difference. At best, these experiments offer us a pale reflection of intellectual engagement and collective social life. At worst, they contribute to the destruction of diverse communities and the very conditions for the solidarity required for real resistance.

[…]

The suggestion that a self-replicating taxonomy can produce knowledge and insights generally formulated over the course of a human life seems to defy reason. But this is exactly the claim being made by […] techno-boosterism at large: that a sophisticated enough machine can replicate the most complex human creations. This is, of course, just how machine production has always worked and evolved—each generation witnessing the disappearance of a set of skills and body of knowledge thought to be uniquely human. Art making, writing, and other highly skilled intellectual endeavors are not inherently more human, precious, or worthy of preservation than any skilled manufacture subsumed by the assembly lines of the past century. In the case of generative AI and other recent developments in machine learning, though, we are witnessing both the subsumption of cultural production by machines and the enclosure of vast swathes of subjective experience. Dramatic changes to production have always been accompanied by fundamental changes in the organization of social life beyond the workplace, but this is a qualitatively different phenomenon.

[…]

In the nineteenth century, Karl Marx observed that machinery is not just a means of production but a form of domination. In a mechanized, industrial economy, “labor appears […] as a conscious organ, scattered among the individual living workers at numerous points of the mechanical system […] as itself only a link of the system, whose unity exists not in the living workers, but rather in the living (active) machinery.” This apparent totality of machinery “confronts [the worker’s] individual, insignificant doings as a mighty organism. In machinery, objectified labor confronts living labor […] as the power which rules it.” […] Theodor Adorno and Max Horkheimer described popular entertainment as a relentless repetition of the rhythms of factory production; a way for the workplace to haunt the leisure time of the off-duty worker. The consumption of generative AI as entertainment seems like another order of psychic submission.

Source: e-flux

Image: Rashaad Newsome Studio (taken from the essay)

That’s how we got in this mess to begin with

Red heart made out of binary digits

Ben Werdmuller points to this article and says that “self-sovereignty should be available to all” because “if only wealthy people can own their own stuff, the movement is meaningless.”

If I’m understanding the arguments that PJ Onori is making (below) and Ben is making (implicitly), then they’re conflating “owning your data” with having things “on a site you control.” I’ve got a microserver under the desk in my office. All of the data on there is “mine” in that I can physically pick it up and take it elsewhere. But… is this what we’re advocating for? It seems unrealistic.

What seems more realistic is having your stuff “on a site you control.” But what does “control” mean in this context? For most people it’s not technical control, because they won’t have the knowledge or skills. Instead, it’s power, which is the thing I think is missing from most arguments around Open Source and Free Software. The missing piece, I would argue, is creating democratic organisations such as cooperatives to give people together a way of pushing back against the combined power of Big Tech and nation states. Doing it individually is a fool’s errand.

PS The reason you’ll never hear me talk of “self-sovereignty” is mainly because of this book co-written by the father of arch-Tory Jacob Rees-Mogg.

It’s 2025. Read.cv is shutting down. WordPress is on fire. Twitter has completely melted down. Companies are changing their content policies en masse. Social networks are becoming increasingly icy towards anything outside of their walled garden. Services are using the content you post to feed proprietary LLMs. Government websites appear to be purging data. It’s a wild time.

[…]

Now, more than ever, it’s critical to own your data. Really own it. Like, on your hard drive and hosted on your website. Ideally on your own server, but one step at a time.

[…]

Is taking control of your content less convenient? Yeah–of course. That’s how we got in this mess to begin with. It can be a downright pain in the ass. But it’s your pain in the ass. And that’s the point.

Source: PJ Onori’s blog

Image: Alexander Sinn

Loose, liminal time with others used to be baked into life

silhouette photo of four people dancing on sands near shoreline

I think it says something about the state of the world that articles have to be written encouraging us to hang out with others, and indeed how to do so. But here we are.

It’s easy to live an over-scheduled life, especially if you have kids. That makes it particularly difficult to make, or encourage other people to make, unscheduled calls. But that kind of thing is the spice of life. I need more serendipity in mine, for sure.

Nowadays… unstructured moments seem fewer and farther between. Socializing nearly always revolves around a specific activity, often out of the house, and with an implied start and end time. Plans are Tetris-ed into a packed calendar and planned well in advance, leaving little room for spontaneity. Then, when we inevitably feel worn out or like our social battery’s drained, we retreat inward under the pretense of self-care; according to pop culture, true rest can only happen at home, alone, often in a bubble bath or bed.

Of course, solo veg time can be rejuvenating (and necessary), but I think we’ve lost sight of how relaxing with loved ones can also fill our cup, and make us feel less lonely. And after talking with a couple of experts on the topic, I know I’m not the only one. […]

Loose, liminal time with others used to be baked into life. It’s been slowly wedged out thanks to smartphones, go-go-go lifestyles, a fiercely individualistic society, and a host of other cultural shifts

[…]

Because there’s less pressure to perform or meet expectations, free-flowing togetherness also encourages authenticity, Dr. Stratnyer adds—and the ability to be your true self is no small thing. Social psychology researchers have found that showing up authentically in close relationships improves self-esteem; lowers levels of anxiety, depression, and stress; and is essential to building trusting, stable, satisfying relationships.

[…]

It can be as easy as saying, “Come over and let’s just hang out” or “Drop by whenever! I have no plans and would love to catch up.” When you extend invites like this, “you signal that the focus is on enjoying each other’s company rather than completing a list of activities,” Dr. Hafeez says. “With no rigid agenda, people are free to explore whatever feels right. The beauty of this kind of get-together is that things can unfold naturally, creating unforgettable memories.”

Source: SELF

Image: Javier Allegue Barros

Putting the news in its damn place

A stack of newspapers

In his most recent newsletter, Warren Ellis mentioned something that I’ve been feeling, but feeling somewhat guilty about. Namely: it’s difficult to carve out space to live a flourishing life when you spend most of your days avoiding bad news.

Yes, I’m sharing some of it here — or at least, commentaries on some of it. How could I not? My feeds feature little else but people throwing their hands in the air about democracy and/or AI. But I think this is good advice from Ellis.

Thing is, not only is the news all the bloody same, all about the same country and the same handful of main characters, and every news service reports all the incremental updates to the same bloody stories every sixty seconds: but that constant battering tide of zone-flooding shit compresses time and shrinks space to think. And I want this year to feel like a year and not three bloody weeks.

It’s not about “taking a break from the news,” which various newsletters have suggested is now A Thing. And, you know, if you live in certain places right now, taking a break from the news might feel a luxury at best and a wilful ignoring of alarm bells at worst. On a single evening last week I talked to three people setting plans to bug out of the US.

It’s more about putting the news in its damn place and creating more space to live in.

Source: Orbital Operations

Image: Utsav Srestha

People think that fascism arrives in fancy dress

A group of people standing in front of a building, one is holding a sign that reads 'Burn Fascism not Fossil Fuels'

I said last week there are more historical authoritarian regimes to compare what’s happening around the world to than just Nazi Germany. I’m sick of my news feeds being full of people freaking out about what’s happening, as if this hasn’t been going on for years now.

I’m a reader of The Guardian and subscribe to the weekly print edition. But I’m finding the pointing-and-staring a little grating, which is why I appreciate this from Zoe Williams. I appreciate Carole Cadwalladr’s candid articles even more — although she does tend to post them on the Nazi-platforming Substack.

Like many people, I often feel as if I grew up with the Michael Rosen poem that starts: “I sometimes fear that / people think that fascism arrives in fancy dress.” In fact, it was written in 2014, but it was such a neat distillation that it instantly joined the canon of words that had always existed, right up there with clouds being lonely and parents fucking you up. Obviously, fascism arrives as your friend. How else would it arrive?

[…]

Between 1933 and 1939, the journalist Charlotte Beradt compiled The Third Reich of Dreams, in which she transcribed the nightmares of citizens from housemaids to small-business owners, then grouped them thematically, analysed them, and smuggled them to the US. They were published in 1968. A surprising, poignant number of them were about people dreaming that it was forbidden to dream, then freaking out in the dream because they knew they were illegitimately dreaming. There were amazingly prescient themes, of hyper-surveillance by the state before it had even begun, of barbarous violence, again, before it had started. But the paralysis theme was possibly the most recurrent and striking – people’s limbs frozen in Sieg Heils, voices frozen into silence, motifs of inaction from the most trivial to the most all-encompassing.

Source: The Guardian

Image: Mika Baumeister

⭐ Support Thought Shrapnel!

Join the Holographic Sticker Crew for a £5/month donation and keep Thought Shrapnel going. My Ko-fi page also links to ebooks and options for Critical Friend consultations 🤘

Updates (23rd Feb):

  1. Thanks to Adam Procter for becoming the first member of the crew!
  2. I’m exploring new horizons at the moment, so please let me know of any opportunities 🙂
  3. I made a thing called Album Shelf which you may like, and which I discuss in Weeknote 08/2025

Shaped into SNARF to spread

Illustration of an island in the middle of the sea

I should imagine many people who read Thought Shrapnel also read Stephen Downes' OLDaily, so may already have seen this by Jonah Peretti, CEO of BuzzFeed. What interested me was the acronym SNARF, which is as good as any for being a short way of differentiating between centralised, for-profit, highly algorithmic social networks, and their opposite.

The quotation below comes from The Anti-SNARF Manifesto, which is linked from the sign-up page for a new social network which features an illustration of an island. That’s interesting symbolism; I wonder if it will use a protocol such as ActivityPub (which underpins Fediverse apps such as Mastodon) or ATProto (which is used by Bluesky)? It would be a bit of a ballsy move to start completely from scratch.

Given the number of boosts and favourites I’ve had on my Fediverse post asking people to add a content warning for things relating to US politics, I’d think that moderation is a potential differentiator. People don’t want a completely straight reverse-chronological feed, it would seem, but nor do they want to feel manipulated by an opaque algorithm. I’ll be following this with interest and I have, of course, signed up to be notified when it launches.

SNARF stands for Stakes/Novelty/Anger/Retention/Fear. SNARF is the kind of content that evolves when a platform asks an AI to maximize usage. Content creators need to please the AI algorithms or they become irrelevant. Millions of creators make SNARF content to stay in the feed and earn a living.

We are all familiar with this kind of content, especially those of us who are chronically online. Content creators exaggerate stakes to make their content urgent and existential. They manufacture novelty and spin their content as unprecedented and unique. They manipulate anger to drive engagement via outrage. They hack retention by withholding information and promising a payoff at the end of a video. And they provoke fear to make people focus with urgency on their content. Every piece of content faces ruthless Darwinian competition so only SNARF has the ability to be successful, even if it is inaccurate, hateful, fake, ethically dubious, and intellectually suspect.

This dynamic is causing many different types of content to evolve into versions of the same thing. Once you understand this you can see how much of our society, culture, and politics are downstream from big tech’s global SNARF machines. The political ideas that break through, from both Democrats and Republicans, need to be shaped into SNARF to spread. Through this lens, MAGA and “woke” are the same thing! They both are versions of political ideas that spread through raw negative emotion, outrage, and novelty. The news stories and journalism that break through aren’t the most important stories, but rather the stories that can be shaped into SNARF. This is why it seems like every election, every new technology, every global conflict has the potential to end our way of life, destroy democracy, or set off a global apocalypse! It is not a coincidence that no matter what the message is, it always takes the same form, namely memetically optimized media that maximizes stakes and novelty, provokes anger, drives retention, and instills fear. The result is an endless stream of addictive content that leaves everyone feeling depressed, scared, and dissatisfied.

[…]

But there is some hope, despite the growing revenue and usage of the big social media platforms. We are beginning to see the first cracks that suggest there might be an opportunity to fight back. A recent study by the National Bureau of Economic Research found that the majority of respondents would prefer to live in a world where TikTok and Instagram did not exist! There was generally a feeling of being compelled to use these projects because of FOMO, social pressure, and addiction. A large portion of users said they would pay money for TikTok and Instagram to not exist, suggesting these products have negative utility for many people. This challenges traditional economics which posits that consumers choosing a product means it provides positive utility. Instead, social media companies are using AI to manipulate consumer behavior for their own ends, not the benefit of the consumer. This aligns with what these researchers suspect is happening, namely that “companies introduce features that exacerbate non-user utility and diminish consumer welfare, rather than enhance it, increasing people’s need for a product without increasing the utility it delivers to them.”

Source: The Anti-SNARF Manifesto

Image: cropped from the background image on the above website

We’re hard-wired for addiction

Blurred photo of man moving head

I think what Scott Galloway is saying here is that unfettered capitalism, which allows companies to addict people to products detrimental to their health, is a bad thing? That seems pretty obvious.

What I think Americans are missing, to be honest, is a way of saying that they want ‘socialism’ without it being equated with ‘communism’. I see lots of tortuous statements about ‘post-capitalism’ and other terms. But the rest of the world understands government intervention for the health and flourishing of citizens as ‘socialism’, not ‘communism’.

The world’s most valuable resource isn’t data, compute, oil, or rare earth metals; it’s dopa, i.e., the fuel of the addiction economy, which runs the most valuable companies in history. Addiction has always been a component of capitalism — nothing rivals the power of craving to manufacture demand and support irrational margins.

[…]

Historically, the most valuable companies turn dopa into consumption. Over the last 100 years, 15 of the top 30 companies by cumulative compound return have been pillars of the addiction economy. The compounders cluster in tobacco (Altria +265,528,900%), the food industrial complex (Coca-Cola +12,372,265%), pharma (Wyeth +5,702,341%), and retailers (Kroger +2,834,362%) that sell both substances and treatments. To predict which companies will be the top compounders over the next century, consider this: Eight of the world’s 10 most valuable businesses turn dopa into attention, or make picks and shovels for these dopa merchants.

[…]

Now that everyone has a cellphone, we spend 70% less time with our friends than we did a decade ago. We’re addicted to our phones, and even when we’re not seeking our fix, our phones seek us out — notifying us on average 46 times per day for adults and 237 times per day for teens. In college, I spent too much time smoking pot and watching Planet of the Apes, but when I decided to venture on campus, my bong and Cornelius didn’t send me notifications.

[…]

We’re hard-wired for addiction. We’re also wired for conflict, as competing for scarce resources has shaped our neurological system to swiftly detect, assess, and respond to threats — often before we’re aware of them. As technology advances, our wiring makes us more powerful and more vulnerable. We produce dopa monsters at internet speed. We can wage war at a velocity and scale that risks extinction in the blink of an eye.

Source: No Mercy / No Malice

Image: Mishal Ibrahim

What burns people out is not being allowed to exercise their integrity instincts

Fire

In this wide-ranging article, Venkatesh Rao discusses a number of things, including the unfolding Musk/DOGE coup. I’m ignoring that for the moment, as anything I write about it will be out of date by next week. The two parts I found most interesting from Rao’s piece were: (i) his comparison of people who tolerate inefficiency and interruption versus those who don’t, and (ii) his assertion that burnout comes from not being able to exercise integrity.

The two are related, I think. When you have to do things a particular way, subsuming your identity and values to someone else’s, it denies a core part of who you are as a person. While it’s relatively normal to self-censor to present oneself as a particular type of person, doing so in a way which is in conflict with your values is essentially a Jekyll/Hyde problem. And we all know what happened at the end of that story.

A big tell of whether you are an “open-door” type person is whether you tolerate a high degree of apparent inefficiency, interruption, and refractory periods of reflection that look like idleness. All are signs that your mental doors are open and are taking in new input. Especially dissenting input that can easily be interpreted as disloyal or traitorous by a loyalty-obsessed paranoid mind. Input that forces you to stop acting and switch to reflecting for a while.

Conversely, if you’re all about “efficiency” and a “maniacal sense of urgency” and a desperate belief that your “first principles” are all you need, you will eventually pay the price. A playbook that worked great once will stop working. Even the most powerful set of first principles that might be driving you will leave you with an exhausted paradigm and nowhere to go.

[…]

What truly burns people out is not that their boss is too demanding, hot-tempered, or even sadistic. What burns people out is not being allowed to exercise their integrity instincts. Being asked to turn off or delegate their moral compass to others. Plenty of people have the courage, the desperation, the ambition, or all three, to deal with demanding and scary bosses. But not many people can indefinitely suspend integrity instincts without being traumatized and burning out.

Source: Contraptions

Image: Danylo Suprun

All intelligence is collective intelligence

Brown mushrooms on green grass during daytime

The concept of ‘intelligence’ is a slippery one. It’s a human construct and, as such, privileges not only our own species, but those humans who at any given time have power and control over what counts as ‘intelligent’. There have been moves, especially recently, to ascribe intelligence to species that we don’t commonly eat, such as dolphins and crows.

But what about animals humans do eat? As a vegetarian I regularly feel guilty for consuming eggs and dairy; what kind of suffering am I causing sentient animals? But, I console myself, at least I don’t eat them any more.

A foolish consistency may be the hobgoblin of little minds, according to Emerson, but it is useful to have a consistent and philosophically sound position on things. This article by Sally Adee is a pretty long read, but worthwhile. It not only covers animal intelligence, but that of plants, fungi, and (of course!) machines.

A small but growing number of philosophers, physicists and developmental biologists say that, instead of continually admitting new creatures into the category of intelligence, the new findings are evidence that there is something catastrophically wrong with the way we understand intelligence itself. And they believe that if we can bring ourselves to dramatically reconsider what we think we know about it, we will end up with a much better concept of how to restabilize the balance between human and nonhuman life amid an ecological omnicrisis that threatens to permanently alter the trajectory of every living thing on Earth.

No plant, fungus or bacterium can sit an IQ test. But to be honest, neither could you if the test was administered in a culture radically different from your own. “I would probably soundly fail an intelligence test devised by an 18th-century Sioux,” the social scientist Richard Nisbett once told me. IQ tests are culturally bound, meaning that they test the ability to represent the particular world an individual inhabits and manipulate that representation in a way that maximizes the ability to thrive in it.

What would we find if we could design a test appropriate for the culture plants inhabit?

[…]

Electrophysiological readings, for example, have for a long time revealed striking similarities in the activity of humans, plants, fungi, bacteria and other organisms. It’s uncontroversially accepted that electrical signals coordinate the physical and mental activities of brain cells. We have operationalized this knowledge. When we want to peer into the mental states produced by a human brain’s 86 billion or so neurons, we eavesdrop on their cell-to-cell electrical communication (called action potentials). We have been measuring electrical activity in the brain since the electroencephalogram was invented in 1924. Analyzing the synchronized waves produced by billions of electrical firings has allowed us to deduce whether a person is asleep, dreaming or, when awake, concentrating or unfocused.

[…]

“The reality is that all intelligence is collective intelligence,” [developmental biologist Michael] Levin told me. “It’s just a matter of scale.” Human intelligence, animal swarms, bacterial biofilms — even the cells that work in concert to compose the human anatomy. “Each of us consists of a huge number of cells working together to generate a coherent cognitive being with goals, preferences and memories that belong to the whole and not to its parts.”

[…]

“We are not even individuals at all,” wrote the technologist and artist James Bridle in “Ways of Being,” a 2022 study of multiple intelligences. “Rather we are walking assemblages, riotous communities, multi-species multi-bodied beings inside and outside of our very cells.”

Bridle was referring to (among other things) the literal pounds of every human body that consists not of human cells but bacteria and fungi and other organisms, all of which play a profound role in shaping our so-called “human” intelligence.

[…]

If we can let go of the idea that the only locus of intelligence is the human brain, then we can start to conceive of ways intelligence manifests elsewhere in biology. Call it biological cognition or biological intelligence — it seems to manifest in the relationships between individuals more than in individuals themselves. […]

“The boundaries between humans and nature and humans and machines are at the very least in suspense,” wrote the philosopher Tobias Rees. Moving away from human exceptionalism, he argued, would help “to transform politics from something that is only concerned with human affairs to something that is truly planetary,” ushering in a shift from the age of the human to “the age of planetary reason.”

Source: NOEMA

Image: Landon Parenteau

From cheapfakes to deepfakes

Graffiti saying 'FAKE'

I was listening on the radio to someone who was talking about AI. At first, I was skeptical of what they were saying, as it seemed to be the classic hand-waving of “machines will never be able to replace humans” without being specific. However, they did provide more specificity, mentioning how quickly we can tell, for example, if someone’s tone of voice is “I’m not really OK but I’m pretending to be.”

We spot when something isn’t right. Which is why it’s interesting to me that, while I got 10/10 on my first go on a deepfake quiz, that’s very much an outlier. I’m obviously not saying that I have some magical ability to spot what others can’t, but spending time with technologies and understanding how they work and what they look like is part of AI Literacies.

All of this reminds me of the 30,000 World War 2 volunteers who helped with the Battle of Britain by learning to spot the difference between, for example, a Messerschmitt Bf 109 and a Spitfire by listening to sound recordings, looking at silhouettes, etc.

Deepfakes have become alarmingly difficult to detect. So difficult, that only 0.1% of people today can identify them.

That’s according to iProov, a British biometric authentication firm. The company tested the public’s AI detective skills by showing 2,000 UK and US consumers a collection of both genuine and synthetic content.

[…]

Last year, a deepfake attack happened every five minutes, according to ID verification firm Onfido.

The content is frequently weaponised for fraud. A recent study estimated that AI drives almost half (43%) of all fraud attempts.

Andrew Bud, the founder and CEO of iProov, attributes the escalation to three converging trends:

  1. The rapid evolution of AI and its ability to produce realistic deepfakes

  2. The growth of Crime-as-a-Service (CaaS) networks that offer cheaper access to sophisticated, purpose-built, attack technologies

  3. The vulnerability of traditional ID verification practices

Bud also pointed to the lower barriers of entry to deepfakes. Attackers have progressed from simple “cheapfakes” to powerful tools that create convincing synthetic media within minutes.

Source: The Next Web

Image: Markus Spiske

Redefining terms like “hate speech” is obviously part of the fascist project

Image of banned words shared in Gizmodo article

The situation in the US is a slide into authoritarianism. That much is plain to see. Some people are wary of using the label ‘fascist’ perhaps because their only mental model of what’s going on is a hazy understanding of events from the 1930s in Nazi Germany.

However, there have been many authoritarian regimes that have done unspeakable harm to their people, including mass murder of populations. It starts with language, and ends with concentration camps (invented by the Spanish in Cuba, by the way). This article in Gizmodo includes a list of banned words that will get your National Science Foundation (NSF) grant funding application rejected.

For those looking for precedents of authoritarian regimes beginning their censorship efforts by targeting language immediately upon gaining power, you might want to check out this page I created with Perplexity which summarises some examples. It also gives references for further reading.

According to the Washington Post, a word like “women” appearing will get the content flagged, but it will need to be manually reviewed to determine if the context of the word is related to a forbidden topic under the anti-DEI order. Trump and his fellow fascists use terms like DEI to describe anything they don’t like, which means that the word “women” is on the forbidden list while “men” doesn’t initiate a review.

Straight white men are seen through the MAGA worldview as the default human and thus wouldn’t be suspicious and in need of a review. Any other type of identity is inherently suspect.

Some of the terms that are getting flagged are particularly eyebrow-raising in light of the Nazi salutes that Trump supporters have been giving since he took office. For example, the term “hate speech” will get a paper at NSF flagged for further review. Redefining terms like “hate speech” is obviously part of the fascist project.

Source: Gizmodo

Image: List of banned words for NSF grants shared by Ashenafi Shumey Cherkos

Capitalism would simply die if it met all of our needs, and our needs are not that hard to fill

Grayscale photo of a man carrying bags of shopping while walking past a homeless man sitting on the pavement outside a Prada store

As promised, I’ve returned to e-flux with an essay from Charles Tonderai Mudede, a Zimbabwean-born cultural critic, urbanist, filmmaker, college lecturer, and writer. In it, he discusses the origins of capitalism, arguing that many have missed the point: capitalism is focused on luxury goods and their consumption, and therefore can never reach a steady state, an equilibrium where everyone’s needs are met.

It’s a long-ish read, and makes some fascinating digressions (I love the story about the tulip bulb misidentified as an onion), but what I’ve quoted below captures, I think, the main points being made.

Indeed, the key to capitalist products is not their use value but their uselessness, which is why so many goods driving capitalist growth were (and are) luxuries: coffee, tea, tobacco, beef, china, spices, chocolate, single-family homes, and ultimately automobiles—which define capitalism in its American moment. It’s no accident that the richest man of our times is a car manufacturer.

[…]

Capitalism has never been about use value at all, a misreading that entered the heart of Marxism through Adam Smith’s influence on Marx’s political economy. The Dutch philosopher Bernard Mandeville’s economics, on the other hand, represents a reading of capitalism that corresponds with what I call its configuration space, in which the defining consumer products are culturally actualized compossibilities—and predetermined, like luxuries associated with vice. The reason is simple: capitalism would simply die if it met all of our needs, and our needs are not that hard to fill.

This is precisely where John Maynard Keynes made a major mistake in his remarkable and entertaining 1930 essay “Economic Possibilities for Our Grandchildren.” He assumed that capitalism’s noble project was to alleviate its own scarcity, its own uneven distribution of capital. Yes, he really thought that the objective of capitalism was capitalism’s own death. And indeed, the late nineteenth-century neoclassical economists universally believed this to be the case. They told the poor to leave capital accumulation to the specialists, as it alone could eventually eliminate all wants and satisfy all needs. It’s just a question of time. It is time that justified the concentration of capital in a few hands, the hands of those who had it and did not blow it. And this fortitude, which the poor lacked, deserved a reward. The people provided labor, which deserved a wage; the rich provided waiting, which deserved a profit. […]

What was missing in Keynes’s utopia? Even with little distinction from socialism, what was missing was the basic understanding that capitalism is not about producing the necessities of life, but about using every opportunity to transfer luxuries from the elites to the masses. This is the point of Boots Riley’s masterpiece Sorry to Bother You (2018), a film that may be called surreal by those who have no idea of the kind of culture they are in. The real is precisely the enchantment, the dream. Capitalism’s poor do not live in the woods but instead, like Sorry to Bother You’s main character, Cassius “Cash” Green (played by LaKeith Stanfield), drive beat-up or heavily indebted cars; work, in the words of the late anarchist anthropologist David Graeber, “bullshit jobs”; and sleep in vehicles made for recreation (RVs) or tents made for quick weekend breaks from urban stress, or for the lucky ones, in garages (houses for cars). This is what poverty actually looks like in a society that’s devoted to luxuries rather than necessities.

[…]

Capitalism is not, at the end of the day, based on the production of things we really need (absolute needs), for if it was, it would have already become a thing of the past. Or, in the language of thermodynamics, it would have reached equilibrium. (Indeed, the nineteenth-century British political economist John Stuart Mill called this equilibrium “a stationary state.”)

[…]

For example, an apparent shortage of housing—an absolute need or demand, meaning every human needs to be housed—could easily be solved. But what do you find everywhere in a very rich city like Seattle? No developments that come close to satisfying widespread demand for housing as an absolute need. This fact should sound an alarm in your head. We are in a system geared for relative needs. And capital’s re-enchantment is so complete that it’s hard to find a theorist who has attempted to adequately (or systemically) recognize it as such. This kind of political economy (or even anti-political economy) would find its reflection in lucid dreaming. Revolution, then, is not the end of enchantment (“the desert of the real”) but can only be re-enchantment. We are all made of dreams.

Source: e-flux

Image: Max Böhme

The occupational classification of a conversation does not necessarily mean the user was a professional in that field

Various charts showing findings from the Anthropic report

I find this report (PDF) by Anthropic, the AI company behind Claude.ai, really interesting. First, I have to note that they’ve purposely used a report style that looks like it’s been published in an academic journal. But, of course, it hasn’t, which means it’s not peer-reviewed. I’m not saying this invalidates the findings in any way, especially as they’ve open-sourced the dataset used for the analysis.

Second, although they’ve mapped occupational categories, as the Anthropic researchers point out, “the occupational classification of a conversation does not necessarily mean the user was a professional in that field.” I’ve asked LLMs about health-related things, for example, but I am not a health professional.

Third, and maybe I’m an edge case here, but I use different LLMs for different purposes:

  • I primarily use ChatGPT for writing and brainstorming assistance, as well as converting one thing into another. For example, this morning I fed it some PDFs to extract skills frameworks as JSON.
  • I use Perplexity when searching for stuff that might take a while to find — for example, the solution to a technical problem that might be on an obscure Reddit or Stack Exchange thread.
  • I turn to Google’s Gemini if I want to have a conversation with an LLM, say if I’m preparing for a presentation or an interview.
  • I use Claude for code-related things because it can create interactive artefacts which can be useful.
  • Finally, for sensitive work, or if a client specifically asks, I use Recurse.chat to interact with local LLM models such as LLaVA and Llama.

What I’m saying, I suppose, is that there’s an element of horses for courses with all of this. Increasingly, people will use different kinds of LLMs, sometimes without even realising it. If Anthropic looked at my use of Claude, they’d probably think I had some kind of programming or data analysis job. Which I don’t. So let’s take this with a grain of salt.

The following extract is taken from the report:

Here, we present a novel empirical framework for measuring AI usage across different tasks in the economy, drawing on privacy-preserving analysis of millions of real-world conversations on Claude.ai [Tamkin et al., 2024]. By mapping these conversations to occupational categories in the U.S. Department of Labor’s O*NET Database, we can identify not just current usage patterns, but also early indicators of which parts of the economy may be most affected as these technologies continue to advance.

We use this framework to make five key contributions:

1. Provide the first large-scale empirical measurement of which tasks are seeing AI use across the economy …Our analysis reveals highest use for tasks in software engineering roles (e.g., software engineers, data scientists, bioinformatics technicians), professions requiring substantial writing capabilities (e.g., technical writers, copywriters, archivists), and analytical roles (e.g., data scientists). Conversely, tasks in occupations involving physical manipulation of the environment (e.g., anesthesiologists, construction workers) currently show minimal use.

2. Quantify the depth of AI use within occupations …Only ∼ 4% of occupations exhibit AI usage for at least 75% of their tasks, suggesting the potential for deep task-level use in some roles. More broadly, ∼ 36% of occupations show usage in at least 25% of their tasks, indicating that AI has already begun to diffuse into task portfolios across a substantial portion of the workforce.

3. Measure which occupational skills are most represented in human-AI conversations ….Cognitive skills like Reading Comprehension, Writing, and Critical Thinking show high presence, while physical skills (e.g., Installation, Equipment Maintenance) and managerial skills (e.g., Negotiation) show minimal presence—reflecting clear patterns of human complementarity with current AI capabilities.

4. Analyze how wage and barrier to entry correlates with AI usage …We find that AI use peaks in the upper quartile of wages but drops off at both extremes of the wage spectrum. Most high-usage occupations clustered in the upper quartile correspond predominantly to software industry positions, while both very high-wage occupations (e.g., physicians) and low-wage positions (e.g., restaurant workers) demonstrate relatively low usage. This pattern likely reflects either limitations in current AI capabilities, the inherent physical manipulation requirements of these roles, or both. Similar patterns emerge for barriers to entry, with peak usage in occupations requiring considerable preparation (e.g., bachelor’s degree) rather than minimal or extensive training.

5. Assess whether people use Claude to automate or augment tasks …We find that 57% of interactions show augmentative patterns (e.g., back-and-forth iteration on a task) while 43% demonstrate automation-focused usage (e.g., performing the task directly). While this ratio varies across occupations, most occupations exhibited a mix of automation and augmentation across tasks, suggesting AI serves as both an efficiency tool and collaborative partner.
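The aggregate measures in the extract can be made concrete with a toy sketch. Everything below is hypothetical — the records, occupation names, task counts, and function names are mine, not Anthropic’s actual pipeline — but it illustrates the kind of statistics the report describes: the share of occupations with AI usage on at least some fraction of their tasks, and the augmentation/automation split.

```python
from collections import defaultdict

# Toy, illustrative records: each maps one conversation to an O*NET-style
# occupation and task, plus whether the interaction looked augmentative
# (back-and-forth iteration) or automative (direct delegation).
conversations = [
    {"occupation": "Technical Writer", "task": "draft documentation", "mode": "augmentation"},
    {"occupation": "Technical Writer", "task": "edit style guide", "mode": "automation"},
    {"occupation": "Data Scientist", "task": "write analysis code", "mode": "automation"},
    {"occupation": "Data Scientist", "task": "interpret results", "mode": "augmentation"},
    {"occupation": "Construction Worker", "task": "schedule site work", "mode": "automation"},
]

# Total number of tasks per occupation (in the real study, taken from the
# O*NET task lists). These counts are made up.
onet_tasks = {
    "Technical Writer": 4,
    "Data Scientist": 4,
    "Construction Worker": 20,
}

def coverage(conversations, onet_tasks, threshold):
    """Fraction of occupations showing AI usage on at least `threshold` of their tasks."""
    used = defaultdict(set)
    for c in conversations:
        used[c["occupation"]].add(c["task"])
    hits = sum(
        1 for occ, total in onet_tasks.items()
        if len(used[occ]) / total >= threshold
    )
    return hits / len(onet_tasks)

def mode_share(conversations, mode):
    """Share of conversations classified as `mode`."""
    return sum(c["mode"] == mode for c in conversations) / len(conversations)

print(coverage(conversations, onet_tasks, 0.25))     # occupations with >=25% task coverage
print(mode_share(conversations, "augmentation"))     # augmentation share of conversations
```

In the actual analysis, conversations are mapped to O*NET task statements by privacy-preserving classifiers rather than hand-labelled records, but the roll-up into occupation-level coverage figures is conceptually of this shape.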

Source: The Anthropic Economic Index

Image: (taken from the report)

Flash fictions and creative constraints

Old blank postcard

In his most recent newsletter, Warren Ellis shares his belief that the ideal length of an email is “10 to 75 words.” He compares this with telegrams and postcards, using these as a creative constraint for what he calls ‘flash fictions’.

The average number of words on a postcard was between forty and fifty. The average number of words in a telegram was around fourteen. Last year, I started playing with flash fictions again for the first time in more than a decade. Here’s some.

[…]

The first ever time someone takes your hand, and the first thought you have is “this is everything” and the second is “what happens when it’s gone?” The space of time between those thoughts defines the shape of your life.

[…]

Flat gray post-funeral day, feeling like a human shovel as you dig into your mother’s hoarded life-debris. At the bottom of the midden of corner-shop crap, the book of her crimes. And you recognise your father’s chest tattoo covering its scabbed boards.

[…]

That point at the end of winter when your bones feel damp.

[…]

From my perspective, houses are roomy coffins with plumbing.

Source: Orbital Operations

Image: Jenny Scott

Surplus value must be distributed by and among the workers

No Capitalista - La explotación comercial de esta obra sólo está permitida a cooperativas, organizaciones y colectivos sin fines de lucro, a organizaciones de trabajadores autogestionados, y donde no existan relaciones de explotación. Todo excedente o plusvalía obtenidos por el ejercicio de los derechos concedidos por esta Licencia sobre la Obra deben ser distribuidos por y entre los trabajadores.

I’ve come across lots of different licenses in my time. Some, such as Creative Commons licenses, for example, are meant to stand up in court. Others are more of a form of artistic expression, a way of signalling to an in-group, and a ‘hands-off’ warning for those therefore considered ‘out-group’.

My Spanish is still terrible, so I used DeepL to make the translation of this ‘Non-Capitalist’ clause on the En Defensa del Software Libre website, which is used in addition to the standard Attribution and Share-Alike clauses.

Non-Capitalist - Commercial exploitation of this work is only permitted to cooperatives, non-profit organisations and collectives, self-managed workers' organisations, and where no exploitative relationships exist. Any surplus or surplus value obtained from the exercise of the rights granted by this Licence on the Work must be distributed by and among the workers.

Original Spanish:

No Capitalista - La explotación comercial de esta obra sólo está permitida a cooperativas, organizaciones y colectivos sin fines de lucro, a organizaciones de trabajadores autogestionados, y donde no existan relaciones de explotación. Todo excedente o plusvalía obtenidos por el ejercicio de los derechos concedidos por esta Licencia sobre la Obra deben ser distribuidos por y entre los trabajadores.

Source: En Defensa del Software Libre

The art of not being governed like that and at that cost

a painted sign on a wall that says question everything and smile

I haven’t yet listened to the episode of Neil Selwyn’s podcast entitled ‘What is “critical” in critical studies of edtech?’, but I couldn’t resist reading the editorial written by Felicitas Macgilchrist in the open-access journal Learning, Media and Technology.

Macgilchrist argues that we shouldn’t take the word ‘critical’ for granted, and outlines three ways in which it can be considered. I particularly like her approach to critique of moving the conversation forward by “raising questions and troubling… previously held assumptions and convictions.”

Given what’s happening in the US at the moment, I’ve pulled out the Foucault quotation, because making it difficult to be governed is absolutely how to resist authoritarianism — in any area of life.

When Latour (2004) wondered if critique had ‘run out of steam’, this led to a flurry of responses about critical scholarship today. If, he wrote, his neighbours now thoroughly debunk ‘facts’ as constructed, positioned and political, then what is his role as a critical scholar? Latour proposes in response that ‘the critic is not the one who debunks, but the one who assembles’ (2004, 246). And, in this sense, ‘assembling’ joined proposals to see critical scholarship as ‘reparative’, rather than paranoid or suspicious (Sedgwick 1997), as ‘diffraction’, creating difference patterns that make a difference (Haraway 1997, 268) or as ‘worlding’, a post-colonial critical practice of creation (Wilson 2007, 210). These generative approaches have been picked up in research on learning, media and technology, for instance, analysing open knowledge practices (Stewart 2015) or equitable data practices (Macgilchrist 2019), and most explicitly in feminist perspectives on edtech (Eynon 2018; Henry, Oliver, and Winters 2019). […]

Generative forms of critique invite us to imagine other futures, and have inspired a range of speculative work on possible futures. Futurity becomes, in these studies, less about predicting the future or joining accelerationist or transhumanist futurisms, but about estranging readers from common-sense. SF ‘isn’t about the future’ (Le Guin [1976] 2019, xxi), it’s about the present, generating ‘a shocked renewal of our vision such that once again, and as though for the first time, we are able to perceive [our contemporary cultures’ and institutions’] historicity and their arbitrariness’ (Jameson 2005, 255). […]

If critique is not fault-finding or suspicion but, as one often cited source has it, the ‘art of not being governed like that and at that cost’ (Foucault 1997, 29; Butler 2001), then the critical work outlined here aims to identify how we are currently being governed, to question how this produces the acceptable or desirable horizons of ‘good education’, ‘good teaching’ or ‘good citizens’, and to speculate on alternatives.

Source: Learning, Media and Technology

Image: Marija Zaric

⭐ Become a Thought Shrapnel supporter!

Just a quick reminder that you can become a supporter of Thought Shrapnel by clicking here. Thank you to The Other Doug and ARTiFactor for their one-off tips last week!

We're all below the AI line except for a very very very small group of wealthy white men

A neural network comes out of the top of an ivory tower, above a crowd of people's heads (shown in green to symbolise grass roots). Some of them are reaching up to try and take some control and pull the net down to them. Watercolour illustration.

As a fully paid-up member of the Audrey Watters fan club, I make no apologies for including another one of her articles in Thought Shrapnel this week. This one has much that I could dwell on, but I’m trying not to post too much about the current digital overthrow of democracy in the US at the moment.

One could also say that I could stop posting as much about AI, but then that’s all my information feeds are full of at the moment. And, anyway, it’s an interesting topic.

While you should absolutely go and read the full text, I pulled the following out of Audrey’s post, which references an idea I’ve also seen Venkatesh Rao discuss: being above or below the “API line”. These days, it’s more like an “AI line”.

In 2015, an essay made the rounds (in my household at least) that argued that jobs could be classified as above or below the “API line” – above the API, you wield control, programmatically; below, however, your job is under threat of automation, your livelihood increasingly precarious. Today, a decade later, I think we’d frame this line as an “AI” not an “API line” (much to Kin’s chagrin). We’re all told – and not just programmers – that we have to cede control to AI (to “agents” and “chatbots”) in order to have any chance to stay above it. The promise isn’t that our work will be less precarious, of course; there’s been no substantive, structural shift in power, and if anything, precarity has gotten worse. AI usage is merely a psychological cushion – we’ll feel better if we can feel faster and more efficient; we’ll feel better if we can think less.

We’re all below the AI line except for a very very very small group of wealthy white men. And they truly fucking hate us.

It’s a line, it’s always a line with them: those above, and those below. “AI is an excuse that allows those with power to operate at a distance from those whom their power touches,” writes Eryk Salvaggio in “A Fork in the Road.” Intelligence, artificial or otherwise, has always been a technology of ranking and sorting and discriminating. It has always been a technology of eugenics.

Source: Second Breakfast

Image: CC-BY Jamillah Knowles & We and AI / Better Images of AI / People and Ivory Tower AI

Philosophically discontinuous times?

Collage with mirrors reflecting diverse human figures, symbolising AI data's human origin and the 'human in the loop' concept.

You should, as they say, “follow the money” when people make pronouncements. And when they’re confusing, grand-sounding, and vague, full of big words that point to a radically different future, I’d argue that you should be wary. I’ve re-read this interview with Tobias Rees several times, and I’ve concluded that what he’s saying is… bollocks.

Rees is a “founder of… an R&D studio located at the intersection of philosophy, art and technology” while also being “a senior fellow of Schmidt Sciences’ AI2050 initiative and a senior visiting fellow at Google.” Oh, and he’s a former editor of NOEMA, where this interview is published. While some of what he says sounds relatively believable, I just can’t get over this statement:

What makes AI such a profound philosophical event is that it defies many of the most fundamental, most taken-for-granted concepts — or philosophies — that have defined the modern period and that most humans still mostly live by. It literally renders them insufficient, thereby marking a deep caesura.

The idea that AI is a “profound philosophical event” should start your eyes rolling, and I’d be surprised if they haven’t rolled out of your head by the time you finish the next bit:

The human-machine distinction provided modern humans with a scaffold for how to understand themselves and the world around them. The philosophical significance of AIs — of built, technical systems that are intelligent — is that they break this scaffold.

What that means is that an epoch that was stable for almost 400 years comes — or appears to come — to an end.

Poetically put, it is a bit as if AI releases ourselves and the world from the understanding of ourselves and the world we had. It leaves us in the open.

In general, when people start arbitrarily dividing history into epochs (think “second industrial revolution,” etc.) they usually don’t know what they’re talking about. Rees manages to mention a bunch of philosophers (Karl Jaspers, Karl Marx, Martin Heidegger, etc.) but it’s a scatter-gun approach. Again, I don’t really think he knows what he’s talking about:

The alternative to being against AI is to enter AI and try to show what it could be. We need more in-between people. If my suggestion that AI is an epochal rupture is only modestly accurate, then I don’t really see what the alternative is.

What does this mean? And then, table-flipping time:

As we have elaborated in this conversation, we live in philosophically discontinuous times. The world has been outgrowing the concepts we have lived by for some time now.

We only live in “philosophically discontinuous times” if you haven’t been paying attention, and haven’t done your homework. Another reason to avoid techbro-adjacent philosophising. It’s just a waste of time.

Source: NOEMA magazine

Image: CC-BY Anne Fehres and Luke Conroy & AI4Media / Better Images of AI / Data is a Mirror of Us

That mask is kind of coming off in all sorts of ways now

A collage that merges circuit board patterns with textile motifs in a grid-like background of alternating black, grey, and white. Two hand-drawn arms are on each side of the image, positioned as if gently pulling on thin, white strings that cross the image diagonally. The hands appear soft and somewhat translucent, contrasting with the rigid lines of the circuit board patterns behind them. The strings are woven through both the hands and the background, symbolising the connection between traditional weaving and modern technology. The overall colour palette features muted earth tones, including browns, beiges, and grays, creating a sense of both history and continuity between the natural and technological worlds.

I’d highly recommend listening to Helen Beetham’s latest podcast, where she’s in conversation with Audrey Watters about AI. As you would expect, they eloquently critique AI as a tool of political and economic power, reinforcing right-wing authoritarianism, labour control, and racial hierarchies. The episode also covers AI’s deep ties to military surveillance, eugenics, and Silicon Valley libertarianism, with both arguing that it serves corporate and state interests rather than the public good.

The second half of the podcast episode was my favourite, where they highlight how AI in education standardises learning, erases diversity (“the bell curve of banality”), and reinforces existing biases, particularly privileging male whiteness. The myth of AI as a ‘neutral’ or ‘liberating’ force is well and truly skewered, with them instead positioning ‘Luddism’ as a form of resistance against its exploitative tendencies.

I’ve pulled out one particular exchange from the episode which comes after Helen mentions Sam Altman’s response to DeepSeek r1 — something that has been likened to a ‘Sputnik moment’. The insight I appreciate is the comparison to crypto, which Audrey says was “almost too literal” in terms of being “too obvious of a con”.

Helen Beetham: OK. So, well, it’s kind of predictable, but I think the underlying message is really interesting. So effectively what he says is great. That’s great. They’re going to challenge us to do this at smaller scale. But we still need the build out. We absolutely need every inch of data centre we can have, and we need every piece of compute we can have because we’re going to need a lot of AI. And I think this is the moment where the mask starts to slip, you know, because it’s been clear for over a year that they’re not interested in a viable product. They don’t care whether the use cases work or not. Not except kind of rhetorically and incidentally. They don’t care if it’s valuable. They don’t care what it fucks up. They care about controlling data and compute. And it’s much better than crypto was. It seems to be much more effective than crypto at amassing that intentionality, that state will, that capital in one place to build out the biggest possible amount of data centres that are under the control of these corporations in alliance with these militarised states. And then at the same time, to control massive amounts of data, and that is the underlying project. I feel that that mask is kind of coming off in all sorts of ways now. I could say something about how that plays out in the UK, but I’d really like to hear what you think.

Audrey Watters: Well, I think it’s interesting that the crypto stuff was almost too literal, right? Because this was about the creation of money. Like, literally we’re going to make up a new currency, and wrest power away from the traditional arbiters of money, the government. So it was almost like too nakedly literal. But with the generative AI, now we’re just making up, you know, students' essays. We’re just creating videos, and somehow it seems like a less overt power grab. I mean, I think for obviously for people in education, for people who work in creative industries, it’s an obvious power grab, but I think that it’s almost as though the cryptocurrency was too much of a con. It was too obvious of a con.

Source: imperfect offerings podcast

Image: Better Images of AI

There is no evidence that restrictive school policies are associated with overall phone and social media use or better mental wellbeing in adolescents

Child in uniform using a smartphone with an open notebook and pen on the table.

I usually find abstracts on academic papers a bit rubbish, but this ‘summary’ at the top of a research study is aces. As many people in the UK will have seen in the news over the last week, a study has shown that there’s “no evidence that restrictive school policies are associated with overall phone and social media use or better mental wellbeing in adolescents.” As a result, “the findings do not provide evidence to support the use of school policies that prohibit phone use during the school day in their current form.”

This, of course, does not chime with what the public (parents, politicians, etc.) want to hear, so I imagine it will be widely ignored. In fact, when this was reported in a radio news bulletin I heard, they immediately cut to a soundbite from a headteacher who had implemented a “no phones” policy, who basically said it had worked for them. There are many problems with smartphone use by teenagers in schools. But then there are many problems with schools.

Background: Poor mental health in adolescents can negatively affect sleep, physical activity and academic performance, and is attributed by some to increasing mobile phone use. Many countries have introduced policies to restrict phone use in schools to improve health and educational outcomes. The SMART Schools study evaluated the impact of school phone policies by comparing outcomes in adolescents who attended schools that restrict and permit phone use.

Methods: We conducted a cross-sectional observational study with adolescents from 30 English secondary schools, comprising 20 with restrictive (recreational phone use is not permitted) and 10 with permissive (recreational phone use is permitted) policies. The primary outcome was mental wellbeing (assessed using the Warwick–Edinburgh Mental Well-Being Scale [WEMWBS]). Secondary outcomes included smartphone and social media time. Mixed effects linear regression models were used to explore associations between school phone policy and participant outcomes, and between phone and social media use time and participant outcomes. Study registration: ISRCTN77948572.

Findings: We recruited 1227 participants (age 12–15) across 30 schools. Mean WEMWBS score was 47 (SD = 9) with no evidence of a difference between groups (adjusted mean difference −0.48, 95% CI −2.05 to 1.06, p = 0.62). Adolescents attending schools with restrictive, compared to permissive policies had lower phone (adjusted mean difference −0.67 h, 95% CI −0.92 to −0.43, p = 0.00024) and social media time (adjusted mean difference −0.54 h, 95% CI −0.74 to −0.36, p = 0.00018) during school time, but there was no evidence for differences when comparing usage time on weekdays or weekends.

Interpretation: There is no evidence that restrictive school policies are associated with overall phone and social media use or better mental wellbeing in adolescents. The findings do not provide evidence to support the use of school policies that prohibit phone use during the school day in their current form, and indicate that these policies require further development.

Source: The Lancet

Image: True Images/Alamy (via The Guardian)

Technology is a means of spreading misinformation, not the cause of misinformation

This fine illuminated Book of Hours was produced in two stages in the second and third quarters of the fifteenth century. The manuscript contains eleven full-page miniatures and twenty historiated initials. The first stage of production includes a section attributed to the Masters of Zweder van Culemborg and the calendar (fols. 3r-14v, 52v-211v), while additional prayers illustrated in the style of the workshop of Willem Vrelant were added later in the fifteenth century (fols. 16r-50v, 213r-223r), presumably when the book was bound in its present binding. The Hours of the Virgin is for the Use of Rome. The Use of the Office of the Dead is unidentified, but the calendar is for the Use of Utrecht. The two separate parts of the manuscript were bound together in Flanders. The sections of W.168 attributed to the Masters of Zweder van Culemborg have been compared to Utrecht, Utrecht University Ms. 1037; Cambridge, Fitzwilliam Museum James Ms. 141; the second hand in New York, Pierpont Morgan Library Ms. M.87; Stockholm, Royal Library A 226, and Philadelphia, Free Library Lewis Ms. 88.

As a technologist and educator (former History teacher!) who wrote his doctoral thesis on digital literacies, this article couldn’t be any more in my sweet spot if it tried. Dr Gordon McKelvie talks about his British Academy project on misinformation, focusing on queens “because they were prominent enough figures to be spoken about and blamed for the country’s ills.”

Although only coined in 2013, Brandolini’s law has always been in full effect: “The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.” At least we live in a time when things can, in theory, be rebutted and debunked quickly. Back in medieval times, people would believe misinformation for years — if not their entire lives.

While a focus on the immediate problems confronting democratic states dealing with the spread of conspiracy theories is essential, we should not lose sight of the fact that misinformation was around long before the internet. If we look back into the distant past, we see the spread of conspiracy theories have been a common feature throughout human history. Technology is a means of spreading misinformation, not the cause of misinformation. […]

A key finding has been that fake news often becomes accepted historical fact. An example that illustrates this is the death of Anne Neville, wife of the infamous Richard III. We do not know the exact cause of her death, but it was probably natural causes. One contemporary source, however, claimed that the king needed to deny poisoning her in order to marry his niece. This is the only near-contemporary reference to such an event, written by the hostile Crowland chronicler. By the time Shakespeare was writing his ‘The Tragedy of Richard III’ a century later, this had become an accepted historical fact. Here, we see something that began as a piece of misinformation in the fifteenth century transformed into an accepted historical fact in the sixteenth century. […]

When we look elsewhere in medieval Europe, we see other examples of misinformation premised on existing prejudices. During the First Crusade, mistrust between the Catholic crusaders and the Greek Orthodox Byzantine Empire led to conspiracy theories that the Byzantines were colluding with Muslims against their fellow Christians. When the first wave of the Black Death hit Europe in 1348, Jews were thought to have spread the disease by poisoning wells, simply to kill Christians. In both examples, pre-existing beliefs and fears meant that misinformation and conspiracy theories flourished quickly. […]

Misinformation was a key feature of medieval politics and society. Examining the spread of fake news, or conspiracy theories, in the centuries before even the printing press, never mind the internet, helps us understand how they flourish and their appeal. […]

Historians have an important part to play in fleshing out our understanding of misinformation. We are indeed living in an age of mistrust, but certainly not the first, and almost definitely not the last.

Source: The British Academy

Image: Walters Art Museum

Once you have a 360 view, you can redirect resources to insiders and cut off the opposition

I’ve held off posting anything about what’s currently going on in the USA, as apparently it’s all very confusing even if you’re paying full attention. What did make me sit up and take notice, though, was Jason Kottke’s use of a screengrab from Mad Max: Fury Road when summarising a Bluesky thread by Abe Newman about Elon Musk’s seizure of key parts of the government’s information systems.

For those who haven’t seen the film (one of my favourites, especially the Black & Chrome Edition), it’s the perfect analogy. Immortan Joe, a dictator in a post-apocalyptic landscape, is revered as a god by his followers. He dominates the economy by controlling the only supply of fresh water, which he turns on from time to time, saying “Do not, my friends, become addicted to water. It will take hold of you, and you will resent its absence!” I’ve included a gif above that shows the moment from the film.

Newman links to reporting detailing the systems Musk now controls: payment, personnel, and operations. But seeing them as part of a bigger strategy is important:

The first point is to make the connection. Reporting has seen these as independent ‘lock outs’ or access to specific IT systems. This seems much more a part of a coherent strategy to identify centralized information systems and control them from the top.

Newman continues:

So what are the risks? First, the panopticon. Made popular by Foucault, the idea is that if you let people know that they are being watched from a central position they are more likely to obey. E.g. emails demanding changes or workers will be added to lists…

The second is the chokepoint. If you have access to payments and data, you can shut opponents off from key resources. Sen Wyden sees this coming.

Divert to loyalists. Once you have a 360 view, you can redirect resources to insiders and cut off the opposition.

Source: Kottke.org

Clinical studies have indicated that creatine might have an antidepressant effect

One scoop of white creatine monohydrate powder

Along with about six different supplements, I add creatine to my protein smoothies every day I exercise. Which, to be fair, is most days ending in a ‘y’. Too much of the white powder and I get angry but, as a male vegetarian, it’s important that I get some in my diet.

It turns out that creatine isn’t just good for building and maintaining muscle mass, though: it’s also good for mental health, and combining it with various forms of therapy seems especially beneficial.

More recently, researchers have begun to look at the broader systemic effects of creatine supplementation. Of particular interest has been the relationship between creatine and brain health. Following the discovery of endogenous creatine synthesis in the human brain, research quickly moved to understand what role this compound plays in things like cognition and mood.

Most studies linking brain benefits to creatine supplementation are either small or preliminary but there are enough clues to suggest that something positive could be going on here. For example, one oft-cited clinical trial from 2012 found creatine supplementation can effectively augment anti-depressant treatment. The trial was small (just 52 subjects, all women) but after eight weeks it found those subjects taking creatine supplements with their SSRI antidepressant were twice as likely to achieve remission from depression symptoms compared to those just taking antidepressants.

A recent article reviewing the research on creatine supplementation and depression pointed to several physiological mechanisms that could plausibly explain how this compound could improve mental health. Alongside citing several small trials that found positive results from creatine supplementation, the article concludes by stating: “Creatine is a naturally occurring organic acid that serves as an energy buffer and energy shuttle in tissues, such as brain and skeletal muscle, that exhibit dynamic energy requirements. Evidence, deriving from a variety of scientific domains, that brain bioenergetics are altered in depression and related disorders is growing. Clinical studies in neurological conditions such as PD [Parkinson’s Disease] have indicated that creatine might have an antidepressant effect, and early clinical studies in depressive disorders – especially MDD [Major Depressive Disorder] – indicate that creatine may have an important antidepressant effect.”

Source: New Atlas

Image: HowToGym

The idea that this might in any way appeal to 'newcomers' is bananas to me

Screenshots of OpenVibe

It’s hard not to agree with John Gruber’s analysis of Openvibe, an app that allows you to mash together all of the different decentralised social networks (Mastodon, Bluesky, Threads, etc.) into one timeline. He doesn’t like it, and I have never liked the idea.

That’s partly because it’s confusing, but even if you managed to provide a compelling UX, the rhetorics of interactive communication differ completely between social networks. People on one social network interact using different norms and approaches than on others, which means different literacies are involved. I’d argue that mashing it all together only really serves people who wish to ‘broadcast’ messages to multiple places at the same time.

I really don’t see the point of mashing the tweets from two (or more!) different social networks into one unified timeline. To me it’s just confusing. I don’t love the current situation where three entirely separate, thriving social networks are worth some portion of my attention (not to mention that a fourth, X, still kinda does too). But when I use each of these platforms, I want to use a client that is dedicated to each platform. These platforms all have different features, to varying degrees, and they definitely have different vibes and cultural norms. Pretending that they all form one big (lowercase-m) meta platform doesn’t make that pretense cohesive. Mashing them all together in one timeline isn’t simpler. It sounds simpler but in practice it’s more cacophonous.

The idea that this might in any way appeal to “newcomers” is bananas to me. The concept of streaming multiple accounts from multiple networks into one timeline is by definition a bit advanced. In my experience, for very obvious reasons, casual social network users only use the first-party client. They’re confused even by the idea of using, say, an app named Ivory to access a social network called Mastodon. The idea of explaining to them why they might want to use an app named Openvibe to access Mastodon, Bluesky, and Threads (and the weirdo blockchain network Nostr) is like trying to explain to your dog why they should stay out of the trash. There’s a market for third-party clients (or at least I hope there is), but that market is not made up of “newcomers”.

Source: Daring Fireball

The inevitable cracks in a rigid software logic that enables the surprising, delightful messiness of humanity to shine through

A photo of a diagram in a book showing an algorithm

I’ve been following the development of Are.na since the early days of leading the MoodleNet project. It’s a great example of a platform that serves a particular niche of users (“connected knowledge collectors”) really well.

In this Are.na editorial, Elan Ullendorff — a designer, writer, and educator — talks about the course he teaches. In it, he helps students research and map algorithms, before writing their own, and releasing them to the world.

I write a newsletter, teach a course, and run workshops all called “escape the algorithm.” The implicit joke of the name’s particularity (not “escape algorithms” but “escape the algorithm”) is that living outside of algorithms isn’t actually possible. An algorithm is simply a set of instructions that determines a specific result. The recommendation engine that causes Spotify to encourage you to listen to certain music is a cultural sieve, but so were, in a way, the Billboard charts and radio gatekeepers that preceded it. There have always been centers of power, always been forces that exert gravitational pulls on our behavior.

The anxiety isn’t determined by the presence or absence of code. It comes from a lack of transparency and control. You are susceptible whether or not TikTok exists, whether or not you delete it. Logging off is one tool, but it will not alone cure you.

Instead of withdrawing, I encourage my students to dive deeper, engaging with platforms as if they were close reading a work of literature. In doing so, I believe that we can not only better understand a platform’s ideological premises, but also the inevitable cracks in a rigid software logic that enables the surprising, delightful messiness of humanity to shine through. And in so doing, we might move beyond the flight response towards a fight response. Or if it is a flight response, let it be a flight not just away from something, but towards something.

[…]

Resisting the paths most traveled invites us to look at the platforms we use with a critical eye, leading us to new forms of critique, making visible parts of the world and culture that are out of our view, and inspiring entirely new ways of navigating the web.

Take Andrew Norman Wilson’s ScanOps, a collection of Google Books screenshots that include the hands of low-paid Google data entry workers, or Chia Amisola’s The Sound of Love, which curates evocative comments on Youtube songs. Then there’s Riley Walz’s Bop Spotter (a commentary on ShotSpotter, gunshot detection microphones often licensed by city governments), a constantly Shazam-ing Android phone hidden on a pole in the Mission district.

Source: Are.na

Image: Андрей Сизов

⭐ Become a Thought Shrapnel supporter!

Hi everyone, Doug here. Just to let you know that it’s now possible to support Thought Shrapnel on a monthly basis!

👀 Find out more.

Don’t worry, nothing’s changing other than your ability to ensure the sustainability of this publication, receive a holographic sticker to go on your water bottle (or whatever), and have your name listed as a supporter.

We used to have around 60 supporters of Thought Shrapnel back in the day, so I hope you’ll consider becoming one of the first to get this new (rare!) holographic sticker. This is part of a little February experimentation.

Description of Things and Atmosphere

Black and silver pocket knife

My daughter was complaining that, now she’s in high school, her English teacher demands more of her writing. I happened to have just read a post at Futility Closet about the notebooks of F. Scott Fitzgerald which gives examples of him coming up with vividly atmospheric descriptions of scenes. I shared it with her, so hopefully she’ll use it as inspiration.

While I’m not a fan of overly long descriptions just for the sake of it, this writing is sublime. It makes me want to re-read The Great Gatsby.

In the light of four strong pocket flash lights, borne by four sailors in spotless white, a gentleman was shaving himself, standing clad only in athletic underwear upon the sand. Before his eyes an irreproachable valet held a silver mirror which gave back the soapy reflection of his face. To right and left stood two additional menservants, one with a dinner coat and trousers hanging from his arm and the other bearing a white stiff shirt whose studs glistened in the glow of the electric lamps. There was not a sound except the dull scrape of the razor along its wielder’s face and the intermittent groaning sound that blew in out of the sea.

Source: The Notebooks of F. Scott Fitzgerald

Image: Illia Plakhuta

Cozy comfort for gamers

A screenshot of the start of the 'Cozy Comfort' article/game

More articles about games should be games themselves, in my opinion! I loved this, and there’s a write-up of how and why it was created here.

I spend enough time on screens, so haven’t really got into the ‘cozy’ genre, but I know that it’s a huge thing. These are games you can play on your own terms, that provide a bit of escapism, and that (as the article describes) are said to be as good as meditation and other forms of deep relaxation.

The gaming industry is larger than the film and music industries combined globally. A growing sector is the subgenre dubbed “cozy games.” They are marked by their relaxing nature, meant to help players unwind with challenges that are typically more constructive than destructive. Recent research explores whether this style of game, along with video games more generally, can improve mental health and quality of life.

These play-at-your-own-pace games attract both longtime gamers and newcomers. […]

There’s no hard definition for a “cozy game.” If the game gives the player a cozy, warm feeling then it fits.

[…]

These games can provide a space for people to connect in ways they may not in the real world. Suzanne Roman, who describes herself as an autistic advocate, said gaming communities can be lifelines for neurodivergent people, including her own autistic daughter who celebrated her 18th birthday in lockdown. “I think it’s just made them more confident people, who feel like they fit in socially. There’s even been relationships, of course, that have formed in the real world out of this.”

Source: Reuters

A large public domain image-text dataset to train frontier LLM models

PD12M

Yesterday, after a conversation on the #ai channel in WAO’s Slack, I published Ways of categorising ethical concerns relating to generative AI. There was some pushback on Mastodon.

Alan Levine asked if I knew of any LLMs which say they’re trained on “open data” and where you can actually see the sources. It’s a good point, and I do know of one, which is Public Domain 12M (or PD12M for short). LLMs are a class of technologies, so (as I was trying to get at in my original post) we should be clear and specific in our objections to them.

Although I don’t share the concern, I understand the position which could be broadly stated as: “I have a problem with LLM datasets being scraped from the open web without the explicit consent of copyright holders.” But that’s not a position against LLMs per se. It’s an objection based on the copyright status of the ingested data.

At 12.4 million image-caption pairs, PD12M is the largest public domain image-text dataset to date, with sufficient size to train foundation models while minimizing copyright concerns. Through the Source.Plus platform, we also introduce novel, community-driven dataset governance mechanisms that reduce harm and support reproducibility over time.

Source: Source.Plus

Strava for Stoics?

Apple Watch showing Strava app

Matt Webb is, like me, over 40 years of age. Although some would argue differently, it’s a time when you realise that your fastest days are behind you. So an app like Strava reminding you that you’re not quite as fast as you were a few years ago isn’t… particularly helpful.

There’s definitely a gap in the market for fitness apps for people who are no longer spring chickens and, although they like to challenge themselves occasionally, aren’t trying to smash it every time they go out for a run, cycle, or to the gym. I also appreciate Webb’s related point that there comes a time when reminders about things in life are just a bit painful. The opportunity not to be reminded about things would be nice.

Part of getting older is finding that my PBs each time I train up - personal bests - are not as quick as they were before. […]

I’m currently trying to increase my baseline endurance and went out for a 17 mi run a few days ago. Paused to take photos of the Thames Barrier and a rainbow, no stress. Beautiful. Felt ok when I finished – hey I made it back! Wild!

Then Strava showed me the almost identical run from exactly 5 years ago, I’d forgotten: a super steady pace, a whole minute per mile faster than this week’s tough muddle through. […]

Our quantified self apps (Strava is one) are built by people with time on their side and capacity to improve. A past achievement will almost certainly remind you of a happy day that can be visited again or, in the rear view mirror, has been joyfully surpassed. But for older people… And I’m not even that old, just past my physical peak… […]

I’m not asking Strava to hide these reminders. I’ve found peace (not as completely as I thought it turns out). But I don’t want to avoid those memories. Reminded, I do remember that time from 2020! It is a happy memory! I like to remember what I could do, even if it is slightly bittersweet! […]

And I’ll bet we’re all having that feeling a little bit, deep down, unnamed and unlocated and unconscious quite often, amplified by using these apps. So many of us are using apps that draw from the quantified self movement (Wikipedia) of the 2010s, in one way or another, and that movement was by young people. Perhaps there were considerations unaccounted for – getting older for one. There will be consequences for that, however subtle.

(Another blindspot of the under 40s: it is the most heartbreaking thing to see birthday notifications for people who are no longer with us. Please, please, surely there is a more humane way to handle this?)

So I can’t help viewing some of the present day’s fierce obsession with personal health and longevity or even brain uploads not as a healthy desire for progress but, amplified by poor design work, as an attempt to outrun death.

Source: Interconnected

Image: Tim Foster

How to Raise Your Artificial Intelligence

Three images of a street. Overlying the image are different shapes which are arranged to look like QR code symbols. These are in white/blue colours and intersect one another. The first image is clear, but the second is slightly more pixelated, and the final image is very pixelated.

This is an absolutely incredible interview with Alison Gopnik (AG) and Melanie Mitchell (MM). Gopnik is a professor of psychology and philosophy who studies children’s learning and development, while Mitchell is a professor of computer science and complexity focusing on conceptual abstraction and analogy-making in AI systems.

There’s so much insight in here, so you’ll have to forgive me quoting it at length. I urge you to go and read the whole thing. The thing that really stood out for me was Gopnik’s philosophical insights based on her experience around child development. Fascinating.

AG: There is an implicit intuitive model that everyday people (including very smart people in the tech world) have about how intelligence works: there’s this mysterious substance called intelligence, and as you have more of it, you gain power and authority. But that’s just not the picture coming out of cognitive science. Rather, there’s this very wide array of different kinds of cognitive capacities, many of which trade off against each other. So being really good at one thing actually makes you worse at something else. To echo Melanie, one of the really interesting things we’re learning about LLMs is that things like grammar, which we might have thought required an independent-model-building kind of intelligence, you can get from extracting statistical patterns in data. LLMs provide a test case for asking, What can you learn just from transmission, just from extracting information from the people around you? And what requires independent exploration and being in the world?

[…]

MM: I like to tell people that everything an LLM says is actually a hallucination. Some of the hallucinations just happen to be true because of the statistics of language and the way we use language. But a big part of what makes us intelligent is our ability to reflect on our own state. We have a sense for how confident we are about our own knowledge. This has been a big problem for LLMs. They have no calibration for how confident they are about each statement they make other than some sense of how probable that statement is in terms of the statistics of language. Without some extra ability to ground what they’re saying in the world, they can’t really know if something they’re saying is true or false.

[…]

AG: Some things that seem very intuitive and emotional, like love or caring for children, are really important parts of our intelligence. Take the famous alignment problem in computer science: How do you make sure that AI has the same goals we do? Humans have had that problem since we evolved, right? We need to get a new generation of humans to have the right kinds of goals. And we know that other humans are going to be in different environments. The niche in which we evolved was a niche where everything was changing. What do you do when you know that the environment is going to change but you want to have other members of your species that are reasonably well aligned? Caregiving is one of the things that we do to make that happen. Every time we raise a new generation of children, we’re faced with this difficulty of here are these intelligences, they’re new, they’re different, they’re in a different environment, what can we do to make sure that they have the right kinds of goals? Caregiving might actually be a really powerful metaphor for thinking about our relationship with AIs as they develop. […]

Now, it’s not like we’re in the ballpark of raising AIs as if they were humans. But thinking about that possibility gives us a way of understanding what our relationship to artificial systems might be. Often the picture is that they’re either going to be our slaves or our masters, but that doesn’t seem like the right way of thinking about it. We often ask, Are they intelligent in the way we are? There’s this kind of competition between us and the AIs. But a more sensible way of thinking about AIs is as a technological complement. It’s funny because no one is perturbed by the fact that we all have little pocket calculators that can solve problems instantly. We don’t feel threatened by that. What we typically think is, With my calculator, I’m just better at math. […]

But we still have to put a lot of work into developing norms and regulations to deal with AI systems. An example I like to give is, imagine that it was 1880 and someone said, all right, we have this thing, electricity, that we know burns things down, and I think what we should do is put it in everybody’s houses. That would have seemed like a terribly dangerous idea. And it’s true—it is a really dangerous thing. And it only works because we have a very elaborate system of regulation. There’s no question that we’ve had to do that with cultural technologies as well. When print first appeared, it was open season. There was tons of misinformation and libel and problematic things that were printed. We gradually developed ideas like newspapers and editors. I think the same thing is going to be true with AI. At the moment, AI is just generating lots of text and pictures in a pretty random way. And if we’re going to be able to use it effectively, we’re going to have to develop the kinds of norms and regulations that we developed for other technologies. But saying that it’s not the robot that’s going to come and supplant us is not to say we don’t have anything to worry about. […]

Often the metaphor for an intelligent system is one that is trying to get the most power and the most resources. So if we had an intelligent AI, that’s what it would do. But from an evolutionary point of view, that’s not what happens at all. What you see among the more intelligent systems is that they’re more cooperative, they have more social bonds. That’s what comes with having a large brain: they have a longer period of childhood and more people taking care of children. Very often, a better way of thinking about what an intelligent system does is that it tries to maintain homeostasis. It tries to keep things in a stable place where it can survive, rather than trying to get as many resources as it possibly can. Even the little brine shrimp is trying to get enough food to live and avoid predators. It’s not thinking, Can I get all of the krill in the entire ocean? That model of an intelligent system doesn’t fit with what we know about how intelligent systems work.

Source: LA Review of Books

Image: Elise Racine & The Bigger Picture

Building a quantum computer that can run reliable calculations is extremely difficult

Red and purple light digital wallpaper

Domenico Vicinanza, Associate Professor of Intelligent Systems and Data Science at Anglia Ruskin University, explains the difference between classical computing and quantum computing. The latter uses ‘qubits’ instead of bits, so instead of just being in the binary state of 0 or 1, they can be in either, or both simultaneously.

Vicinanza gives the example of optimising flight paths for 45,000+ flights, organised by 500+ airlines, using 4,000+ airports. With classical computing, this optimisation would be attempted sequentially using algorithms, which would take too long. With quantum computing, every permutation can be tried at the same time.

Quantum computing deals in probabilities rather than certainties, so classical computing isn’t going away anytime soon. In fact, reading this article reminded me of using LLMs. They’re very useful, but you have to know how to use them — and you can’t necessarily take a single response at face value.

Quantum computers are incredibly powerful for solving specific problems – such as simulating the interactions between different molecules, finding the best solution from many options or dealing with encryption and decryption. However, they are not suited to every type of task.

Classical computers process one calculation at a time in a linear sequence, and they follow algorithms (sets of mathematical rules for carrying out particular computing tasks) designed for use with classical bits that are either 0 or 1. This makes them extremely predictable, robust and less prone to errors than quantum machines. For everyday computing needs such as word processing or browsing the internet, classical computers will continue to play a dominant role.

There are at least two reasons for that. The first one is practical. Building a quantum computer that can run reliable calculations is extremely difficult. The quantum world is incredibly volatile, and qubits are easily disturbed by things in their environment, such as interference from electromagnetic radiation, which makes them prone to errors.

The second reason lies in the inherent uncertainty in dealing with qubits. Because qubits are in superposition (are neither a 0 or 1) they are not as predictable as the bits used in classical computing. Physicists therefore describe qubits and their calculations in terms of probabilities. This means that the same problem, using the same quantum algorithm, run multiple times on the same quantum computer might return a different solution each time.

To address this uncertainty, quantum algorithms are typically run multiple times. The results are then analysed statistically to determine the most likely solution. This approach allows researchers to extract meaningful information from the inherently probabilistic quantum computations.
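The run-many-times-and-take-the-most-common-answer approach described above can be sketched in a few lines of (entirely classical) Python. The `noisy_quantum_run` function and its error rate are my own toy stand-ins for a real probabilistic quantum computation:

```python
import random
from collections import Counter

def noisy_quantum_run(true_answer=3, error_rate=0.4):
    """Toy stand-in for one 'shot' of a probabilistic quantum computation:
    it returns the correct answer most of the time, but sometimes a random
    wrong one, mimicking qubit noise."""
    if random.random() < error_rate:
        return random.choice([x for x in range(8) if x != true_answer])
    return true_answer

# Run the 'algorithm' many times (quantum programmers call these shots)
# and take the statistical mode, just as real quantum workflows aggregate
# repeated measurements to extract the most likely solution.
shots = [noisy_quantum_run() for _ in range(1000)]
most_likely, count = Counter(shots).most_common(1)[0]
print(f"Most likely answer: {most_likely} ({count}/1000 shots)")
```

Even though any single run is unreliable, the aggregate answer is stable, which is exactly why quantum algorithms are run repeatedly and analysed statistically.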

Source: The Conversation

Image: Sigmund

Playing stenographer in your little folding chair

Black folding chairs in rows

It’s hard to avoid the drama unfolding at the start of the second Trump presidential term. I don’t even know, really, what’s going on — other than a lot of confusion and emotional violence. I guess the cruelty is the point.

Anyway, Ryan Broderick of much-quoted Garbage Day fame, has some advice for journalists which is advice for us all, really: the world has changed, so it’s time to adapt. That doesn’t mean “sell out” or “abandon your ethics.” Quite the opposite.

Welcome to 2025. No one reads your website or watches your TV show. Subscription revenue will never truly replace ad revenue and ad revenue is never coming back. All of your influence is now determined by algorithms owned by tech oligarchs that stole your ad revenue and they not only hate you, personally, but have aligned themselves with a president that also hates you, personally. The information vacuum you created by selling yourself out for likes and shares and Facebook-funded pivot-to-video initiatives in the 2010s has been filled in by random “news influencers,” some of which are literally using ChatGPT to write their posts. While many others are just making shit up to go viral. And the people taking over the country currently have spent the last decade, in public, I might add, crafting a playbook — one you dismissed — that, if successful, means the end of everything that resembles America. And that includes our free and open and lazy mainstream media. And they’re pretty confident it’ll succeed because, unlike you, they know how broken the internet is now and are happy to take advantage of it. While I’m sure it feels very professional to continue playing stenographer in your little folding chair at the White House, they’re literally replacing you with podcasters as we speak. So this is it. Adapt or die. Or at the very least, die with some dignity.

Source: Garbage Day

Image: wu yi

3-column blog themes

Screenshot of garry.net

This is more of a bookmark than a post, but I’ve only just discovered the blog of Garry Newman (who some might know from Garry’s Mod). Judging by the HTML, it doesn’t look like he’s using a particular generator (e.g. WordPress) or a theme, so he must have custom-built it.

What I love about it is the logical unfolding from the meta-level navigation on the left, through the column of densely-listed posts in the middle, to the display of the individual post on the right.

If you’re reading this and know of a similar blog theme, on any platform, could you let me know?

Source: garry.net

Those who find the texture of your mind boring or offensive can close the tab

Laptop on the bed with WordPress add new post page.

In his most recent Monday Memo, Dave Gray explained how he channeled Brad Diderickson by composing his newsletter verbally while walking. That conversational style and approach is interesting and engaging, and a good way to not sound ‘stilted’ when writing. Another way is to teach yourself to touch-type, so that the words that are coming out of your brain appear on the computer screen quickly.

I’m thinking about this due to a post that I saw via Hacker News where Henrik Karlsson gives some advice for a friend who wants to start a blog. There are 19 pieces of advice, and #4 is:

People tend to sound more like themselves in chat messages than in blog posts. So perhaps write in the chat, rapidly, to a friend.

And #6:

One reason chat messages are unusually lively is that the format encourages you to write from emotion. You are talking to someone you like and you want to resonate with them, you want to make them laugh. This creates a surge in the writing. It is lovely. When you write from your head, your style sinks back under the waves.

But the best bit of advice in the list is, I think, #18:

In real life, you can’t go on and on about your obsessions; you have to tame yourself to not ruin the day for others. This is a good thing. Otherwise, we’d be ripping each other’s arms off like chimpanzees. But a blog is a tiny internet house where you decide the norms. And since there are already countless places where you can’t be yourself, there is no need to build another one of those. The law of the land is that everything you think is funny is funny. Those who find the texture of your mind boring or offensive can close the tab—no need to worry about them. It is good for the soul to have a place where being just the way you are is normal. And it is a service to others, too. You’ll be surprised how many people are laughably similar to you and who wish there was a place where they felt normal. You can build that.

If you’re reading this and don’t put your words out on the internet on a regular basis, why not change that?

Source: Escaping Flatland

Image: Justin Morgan

Prices and wages are a political matter, not an 'economic' one

Four paper card tags

Cory Doctorow is such an amazing writer and speaker. He explains reasonably complex things so concisely and straightforwardly. This is one such explainer, where he discusses how issues which are usually described as being to do with the ‘economy’ are actually to do with power.

This kind of reframing is really useful, especially for people who, like the proverbial fish swimming in water, haven’t really thought about what it means to live within capitalism.

The cost and price of a good or service is the tangible expression of power. It is a matter of politics, not economics. If consumer protection agencies demand that companies provide safe, well-manufactured goods, if there are prohibitions on price-fixing and profiteering, then value shifts from the corporation to its customers.

But if labor and consumer groups act in solidarity, then they can operate as a bloc and bosses and investors have to eat shit. Back in 2017, the pilots' union for American Airlines forced their bosses into a raise. Wall Street freaked out and tanked AA’s stock. Analysts for big banks were outraged. Citi’s Kevin Crissey summed up the situation perfectly, in a fuming memo: “This is frustrating. Labor is being paid first again. Shareholders get leftovers.”

Limiting the wealth of the investor class also limits their power, because money translates pretty directly into political power. This sets up a virtuous cycle: the less money the investor class has to spend on political projects, the more space there is for consumer- and labor-protection laws to be enacted and enforced. As labor and consumer law gets more stringent, the share of the national income going to people who make things, and people who use the things they make, goes up – and the share going to people who own things goes down.

Seen this way, it’s obvious that prices and wages are a political matter, not an “economic” one. Orthodox economists maintain the pretense that they practice a kind of physics of money, discovering the “natural,” “empirical” way that prices and wages move. They dress this up with mumbo-jumbo like the “efficient market hypothesis,” “price discovery,” “public choice,” and that old favorite, “trickle-down theory.” Strip away the doublespeak and it boils down to this: “Actually, your boss is right. He does deserve more of the value than you do.”

Even if you’ve been suckered by the lie that bosses have a legal “fiduciary duty” to maximize shareholder returns (this is a myth, by the way – no such law exists), it doesn’t follow that customers or workers share that fiduciary duty. As a customer, you are not legally obliged to arrange your affairs to maximize the dividends paid to investors in your corporate landlord or by the merchants you patronize. As a worker, you are under no legal obligation to consider shareholders' interests when you bargain for wages, benefits and working conditions.

The “fiduciary duty” lie is another instance of politics masquerading as economics: even if bosses bargain for as big a slice of the pie as they can get, the size of that slice is determined by the relative power of bosses, customers and workers.

Source: Pluralistic

Image: Angèle Kamp

It seems there have been better times to be alive

Police van on fire during the 2024 Southport Riots

Marina Hyde reflects, in her inimitable way, on the children’s commissioner’s report into why children became involved in last summer’s riots. Apparently, at least 147 were arrested and 84 charged, and almost all were boys. As she points out, it’s not exactly a great time to be a kid in the UK, is it?

Having been a teacher, and as a parent of two teenagers who survived the pandemic lockdowns, I can attest that the world looks pretty grim when they raise their heads from their phones and games controllers. Why would they bother? And what are we, as adults, doing about it?

Children might sometimes do very bad and stupid things, but they are not so stupid that they can’t see they live in a country where the gulf in opportunities is quite staggering. It’s droll to think that two months after the riots, we’d be listening to Keir Starmer’s blithe defence of his decision to take up the freebie loan of an £18m penthouse so his son could study for his GCSEs in peace and quiet. “Any parent would have made the same decision,” explained the prime minister. Any parent, if you please. I do wonder what on earth the parents of the rioting youngsters were doing making the choices they did. I would simply have let my teens spend the afternoon in an £18m penthouse instead. Anyway, speaking of guillotine-beckoning comments, perhaps it isn’t the most enormous surprise that the Channel 4 study found 47% of gen Z agreeing that “the entire way our society is organised must be radically changed through revolution”.

Again, it’s easy to dismiss, but if they believe these things, surely it’s on those of our generations who failed to make the status quo seem remotely appealing? Many of the behaviours of today’s teens and young adults are not simply thick / snowflakey / lazy, but rational responses to a world created by their elders, if not always betters. The childhood experience has deteriorated completely in the past 15 years or so. We have addicted children to – and depressed them with – smartphones, and done next to nothing about this no matter how much evidence of the most toxic harms mounts up. Children in the US are expected to tidy their rooms by generations who also expect them to rehearse active-shooter drills. We require young people to show gratitude for living in an iteration of capitalism in which they have not only no stake, but no obvious hope of getting a stake. It seems to them that there have been better times to be alive.

Source: The Guardian

Image: Wikimedia Commons

This is not the dystopia we were promised

A bunch of architectural boxes stacked on top of each other

Discovered via John Naughton’s Memex 1.1, this article by Henry Farrell explains how we’re not living in an Orwellian or Huxleyian dystopia, but one that resembles the writing of Philip K. Dick.

The reason for this analysis? Dick “was interested in seeing how people react when their reality starts to break down” and, if we’re honest, we can see this happening everywhere. It’s easy to point to Trump and other political examples, but it goes far beyond politics. People exist in their own little bubbles, interacting with other people who may or may not be real, via algorithms they do not control.

This is not the dystopia we were promised. We are not learning to love Big Brother, who lives, if he lives at all, on a cluster of server farms, cooled by environmentally friendly technologies. Nor have we been lulled by Soma and subliminal brain programming into a hazy acquiescence to pervasive social hierarchies.

Dystopias tend toward fantasies of absolute control, in which the system sees all, knows all, and controls all. And our world is indeed one of ubiquitous surveillance. Phones and household devices produce trails of data, like particles in a cloud chamber, indicating our wants and behaviors to companies such as Facebook, Amazon, and Google. Yet the information thus produced is imperfect and classified by machine-learning algorithms that themselves make mistakes. The efforts of these businesses to manipulate our wants leads to further complexity. It is becoming ever harder for companies to distinguish the behavior which they want to analyze from their own and others’ manipulations.

This does not look like totalitarianism unless you squint very hard indeed. As the sociologist Kieran Healy has suggested, sweeping political critiques of new technology often bear a strong family resemblance to the arguments of Silicon Valley boosters. Both assume that the technology works as advertised, which is not necessarily true at all.

Standard utopias and standard dystopias are each perfect after their own particular fashion. We live somewhere queasier—a world in which technology is developing in ways that make it increasingly hard to distinguish human beings from artificial things. The world that the Internet and social media have created is less a system than an ecology, a proliferation of unexpected niches, and entities created and adapted to exploit them in deceptive ways. Vast commercial architectures are being colonized by quasi-autonomous parasites. Scammers have built algorithms to write fake books from scratch to sell on Amazon, compiling and modifying text from other books and online sources such as Wikipedia, to fool buyers or to take advantage of loopholes in Amazon’s compensation structure. Much of the world’s financial system is made out of bots—automated systems designed to continually probe markets for fleeting arbitrage opportunities. Less sophisticated programs plague online commerce systems such as eBay and Amazon, occasionally with extraordinary consequences, as when two warring bots bid the price of a biology book up to $23,698,655.93 (plus $3.99 shipping).

In other words, we live in Philip K. Dick’s future, not George Orwell’s or Aldous Huxley’s.

[…]

In his novels Dick was interested in seeing how people react when their reality starts to break down. A world in which the real commingles with the fake, so that no one can tell where the one ends and the other begins, is ripe for paranoia. The most toxic consequence of social media manipulation, whether by the Russian government or others, may have nothing to do with its success as propaganda. Instead, it is that it sows an existential distrust. People simply do not know what or who to believe anymore. Rumors that are spread by Twitterbots merge into other rumors about the ubiquity of Twitterbots, and whether this or that trend is being driven by malign algorithms rather than real human beings.

Source: Programmable Mutter

Image: Denys Nevozhai

The struggle for attention as the prime moral challenge of our time

Rusty metal iron weathered sticker 'caution' !

I’m posting this from Andrew Curry mainly so I don’t forget the books referenced (already added to my Literal.club queue) and so that I can remember to check back on the Strother School of Radical Attention that he mentions.

Sadly, their awesome-looking courses are either in-person (US) or online at times where it’s the early hours of the morning here in the UK.

There’s something of a history now of writing about the commodification of attention as a feature of late stage capitalism. [Rhoda] Feng [writing in The American Prospect] spotlights a few. Tim Wu’s book The Attention Merchants traces this back to its origins in the late 19th century. James Williams’ Stand out of Our Light: Freedom and Resistance in the Attention Economy positions the struggle for attention as the prime moral challenge of our time. Williams had previously worked for Google. The more recent collection Scenes of Attention: Essays on Mind, Time, and the Senses “took a multidisciplinary approach”, exploring the interplay between attention and practices like pedagogy, Buddhist meditation, and therapy.

[…]

In her review, Feng critiques the framing here of attention as scarcity—essentially an economics version that goes back to Herbert Simon in the 1970s—he said, presciently, “that a wealth of information creates a poverty of attention.”

Frankly, it is more serious than that: as James Williams says in Stand Out of Our Light,

“the main risk information abundance poses is not that one’s attention will be occupied or used up by information … but rather that one will lose control over one’s attentional processes.” The issue, one might say, is less about scarcity than sovereignty.

There are pockets of resistance which are trying to reclaim our sovereignty. Feng points to Friends of Attention, new to me, whose Manifesto calls for the emancipation of attention.

Source: Just Two Things

Image: Markus Spiske

You’re Just a Row in an Excel Table

Green palm plant against blank wall

I’ve only been made redundant once in my career, but I could see it coming, prepared for it, and jumped straight into full-time consultancy. However, my weird brain still surfaces it sometimes in the early hours of the morning when I can’t get back to sleep, along with other ‘failures’ in life. (It wasn’t a failure; I didn’t ‘fail’.)

The thing is, though, that working for any hierarchical organisation, whether it’s for-profit, non-profit, or otherwise, means that you have very little power or say in how things operate. What I liked about this article was how well it explains the difference between how you enter and how you leave an organisation.

The Stoic philosophers tell us that in life you should prepare for death. I don’t think it’s unreasonable in our working lives to also prepare for endings, and do them on our own terms.

For those like me who’ve experienced layoffs, work has become just that—work. You do what’s assigned, and if your company squanders your potential or forces you to waste time on unnecessary projects, you simply stop caring. You collect your paycheck at the end of the month, and that’s it. This is the new modern work: no more striving to be 40% better every year.

[…]

I’ve wanted to write about this topic for a long time, but it’s been difficult to find the energy. The subject itself is a deep disappointment for me, and every time I reflect on layoffs, it makes me profoundly sad. It’s a stark reminder of how companies treat workers as disposable. Before you join, they go to great lengths to make you feel valued and excited to accept their offer. You meet multiple people, and some even offer signing bonuses. But when layoffs come, you’re reduced to a name on a list. During the exit interview, a random person from the company reads a prepared script and can’t answer your questions. The HR team that once worked to make you feel valued doesn’t even conduct an actual conversation with you. That random person becomes the last connection you have to a company you spent years at.

Source: Mert Bulan

Image: Mitchell Luo

Not being bored is why you always feel busy

Group of people on road near highrise buildings at nighttime. A sign reads: 'How many likes is your life worth?'

Kai Brach cites Anne Helen Petersen about cultural tipping points relating to technology use. Petersen, in turn, quotes Kate Lindsay who discusses the lack of boredom in our lives — which exhausts our brains.

In my own life I’ve found that anxiety can be absolutely paralysing, stopping me from getting even the smallest tasks done. I have techniques for getting around that, some of which involve supplements, but mainly it comes down to just doing stuff. That lends a kind of momentum which allows me to get things done.

However, always doing things is tiring. It takes me back to a couple of posts: Who are you without the doing? and Taking breaks to be more human.

Or perhaps, as Anne Helen Petersen suggests in her latest piece, we’ve reached a cultural tipping point:

“The amount of space these technologies take up in our lives – and their ever-diminishing utility – has brought us to a sort of cultural tipping point. [Our feeds have completed their] years-long transformation from a neighborhood populated with friends to a glossy condo development of brands.”

The spaces we once inhabited feel increasingly alien, overtaken by algorithmic ghosts and corporate voices that leave us restless, overstimulated, yet empty and disconnected.

Petersen quotes Kate Lindsay’s writing about how boredom is missing in our lives – and it’s the perfect observation:

“Boredom is when you do the things that make you feel like you have life under control. Not being bored is why you always feel busy, why you keep ‘not having time’ to take a package to the post office or work on your novel. You do have time – you just spend it on your phone. By refusing to ever let your brain rest, you are choosing to watch other people’s lives through a screen at the expense of your own.”

Source: Dense Discovery

Image: Saketh

Someplace where they promise to wear slippers to kick you to death with so it doesn’t hurt so much

Bin bag and slippers next to a door

These days, I spread my social media attention between Mastodon, Bluesky, and LinkedIn. I’m not entirely sure what I’m doing on either of the latter two, if I’m perfectly honest.

LinkedIn is a horrible place that makes me feel bad. But it’s the only real connection I’ve got to some semblance of a ‘professional’ network. Bluesky, on the other hand, just seems like a pale imitation of what Twitter used to be. I’m spending less time there recently.

As Warren Ellis suggests in the following, if you’re going to jump ship from somewhere to somewhere else, it’s probably a good idea that you’re going to be treated well, long-term.

Seeing a lot of people in my RSS announcing they’ve deleted various social media products. Usually to announce they’re on BlueSky or Substack Notes or whatever today’s one is. I am not on any of the new ones and just left the old ones by the side of the road. Some say these accounts should be deleted so you’re not part of the overall user count, but I honestly don’t care that much. And doing all that just to state you’re signing up someplace where they promise to wear slippers to kick you to death with so it doesn’t hurt so much… well, good luck.

Source: Warren Ellis

Image: Sven Brandsma

The rise of mass social platforms has been at the cost of a truly independent, truly open internet

Stormtroopers from Star Wars

Some wise words from Dan Sinker about how we need to reclaim the internet — and why.

I’ve been thinking recently about how anti-fascist writing circulated in Germany after Hitler’s rise. Called tarnschriften, or “hidden writings,” these pocket-sized essays, news updates, and how-tos were hidden inside the covers of mundane, everyday materials.

Get a few pages in to Der Kanarienvogel, “a practical handbook on the natural history, care, and breeding of the canary” and you’re no longer reading about how “the canary is one of the loveliest creatures on earth,” but instead getting the latest updates on the anti-Nazi resistance efforts of the German Communist Party.

[…]

We need to build new things in new ways independent of the oligarchs that now control the government after already controlling much of our lives.

That means moving away from the platforms that have dominated the way we’ve connected, collaborated, and disseminated information for the last couple decades. The rise of mass social platforms has been at the cost of a truly independent, truly open internet. But it’s still there. You can still build anything on it, free of platforms and the overreach of monopolists and oligarchs.

It also means reacquainting ourselves with offline connections. We’ve built for scale for so long (in our software and in our focus on swelling our own follower counts) that we’ve forgotten the power of a handful of people around a table. It’s time to stop chasing scale and start chasing the right people. Spread information table to table, person to person. 1:1 is everything right now.

And while we’re talking offline, let’s talk about making physical media again: music that can’t be taken away with a keystroke, movies that don’t involve a subscription, and news, writing, art and more that can be copied and printed and handed person-to-person—inside seed packets or not.

We have to become the media that has collapsed. Pick up the pieces and build anew. Build robust. Build independent.

Source: Dan Sinker

Image: Bryan McGowan

No breathless whispering of Mark Andreessen across some gilded dinner table

Screenshot of Readwise Reader app

I received Craig Mod’s most recent newsletter, in which he referenced a previous issue from last year. In that prior issue, he talked about ‘digital reading in 2024’; at the time, I mostly focused on his discussion of the mobile phone-sized BOOX Palma e-ink tablet.

However, he also talked about a company called Readwise which he advises. They’ve got a product, “a fabulous long form reading, meta-data-editing, article-organizing platform called Reader” which I’ve been experimenting with today. My workflow is usually based on Pocket, but it feels a bit disorganised and out-of-date in 2025. Reader has features such as the ability to highlight and import anything from the web, automatic article summaries, PDF import, all while also acting as a feed reader and somewhere you can send newsletters.

I have no affiliation, but it’s impressed me today. While Craig Mod likes his BOOX Palma, I prefer my full-size BOOX Note Air 2 e-ink tablet and Google Pixel Fold. Both are Android-based, and so both will be perfect for the Reader app. I’ll perhaps follow up when I’ve got my workflow more fully set up. (It’s $9.99/month once my month-long free trial finishes, but I should be able to get 50% off as a student!)

The Readwise Reader app imports long form articles with aplomb. Parses them almost always perfectly, and paginates fabulously. It also OCRs non-insanely-typeset PDFs into device-sized typographic goodness.

[…]

Some things I adore about Readwise Reader: Solid typography, excellent pagination (seriously, I love how they paginate articles — vertically, sensibly, for easy highlighting across page boundaries), being able to double-tap on a paragraph to highlight the whole thing (much easier than fiddling with sentence highlights, and often you want paragraph context anyway), and built in “ghost reader” functions which provide LLM-based summaries (useful to quickly remember why you saved a particular article) and also LLM-based dictionary / encyclopedia definitions (which have so far been pretty good? although I’d love to be able to load my own dictionaries into the system). I also love that Reader’s web app feels like a kind of “control center” that allows for easy editing of article metadata and more. Install the Obsidian plugin, and you have a full repository of reading history and notes, in Markdown, on your local machine. Reader also has Chrome / Safari plugins that make for one-tap adding to your article Inbox. If you copy a URL and open the Reader App, it’ll automagically ask if you want to add that article to your queue. Lots of nice affordances.

[…]

Readwise, too, is an interesting company. Bootstrapped. No breathless whispering of Mark Andreessen across some gilded dinner table. Just a real company making real money by selling useful services around reading. What a thing!

Source: Roden

It’s possible that OpenAI may some day be seen as the WeWork of AI

Chat interface with a question about the yellow umbrella protests and a detailed response about China's governance.

My LinkedIn and Bluesky accounts have been full of pretty much two things today: the 80th International Holocaust Remembrance Day, and a new Chinese AI model called DeepSeek r1.

There have been many, many hot takes about the latter. I’m not here to do anything other than point out how awesome it is that this model runs offline, is Open Source, and has been trained for 100x less than the equivalent models provided by American companies such as OpenAI, Meta, et al. I also included the image at the top because how much the model has to conform to official Chinese government ideology is, of course, one of the first things that any self-respecting techie will want to test.

As usual, if you’re going to read someone’s opinion about all of this, Ryan Broderick is your guy. Here’s part of what he said in his newsletter Garbage Day which, if you’re not subscribing to at this point, I’m not sure what you’re doing with your life.

Now, we don’t yet know how the American AI industry will react to DeepSeek, but OpenAI’s Sam Altman announced on Saturday that free ChatGPT users are getting access to a more advanced model. Likely as a way to quickly respond to the DeepSeek hype. Meta are also frantically beefing up their own AI tools. But it’s hard to imagine how American AI companies can compete after they spent the last four years insisting that they need infinite money to buy infinite computing power to accomplish what is now open source. DeepSeek r1 can even run without an internet connection. So it’s possible that OpenAI, the biggest money sink of all, may, as cognitive scientist and AI critic Gary Marcus wrote today, “some day be seen as the WeWork of AI.” And that some day might be sooner than you think. The mood is changing fast. El Salvador’s hustle bro millennial dictator Nayib Bukele posted on X over the weekend, “So, [more than] 95% of the cost of developing new AI models is purely overhead?”

But, like TikTok, it’s doubtful that American tech oligarchs are actually capable of accepting how screwed they are because AI is not just a massive pyramid scheme to them. It has ballooned out into a pseudo-religion. And Andreessen has spent the last week frantically posting through it, doing his best impression of a doomsday evangelist trying to convince his flock that, yes, he knew that the roadmap was changing and that, yes, the promised revelation is still coming.

“A world in which human wages crash from AI — logically, necessarily — is a world in which productivity growth goes through the roof, and prices for goods and services crash to near zero,” he wrote on X, quivering in his shell. “Everything you need and want for pennies.” Everything, it seems, also includes AI.

Source: Garbage Day

Image: Alexios Mantzarlis

The jobs of the future will involve cleaning up environmental and political and epistemological disaster

THERE ARE NO JOBS ON A DEAD PLANET. Global climate change strike - No Planet B - Global Climate Strike 09-20-2019

I saw something recently which suggested that, in the US at least, the number of jobs for software developers peaked in 2019 and has been going down ever since. Good job everyone didn’t retrain as programmers, then.

There are any number of think tanks and policy outlets which tell you what they think the future of work, society, economy, etc. will be like. Of course, none of these organisations is neutral and, at the end of the day, all have a worldview to foist upon the rest of us. The World Economic Forum is one of these bodies and, as Audrey Watters discusses in her latest missive, it predicts the most ridiculous things.

I remember reading Fully Automated Luxury Communism by Aaron Bastani when it came out, pre-pandemic. I was optimistic about the role of technology, including AI, as a way of providing everyone’s needs. But the way that it’s actually being rolled-out, especially post-pandemic, when the hypercapitalists and neo-fascists have removed their masks, has left me somewhat more fearful.

It’s a broad generalisation, but you’ve essentially got two options in your working life: you can be part of the problem, or you can be part of the solution. Sadly, there’s a lot of money to be made in being part of the problem.

Reports issued by the World Economic Forum and the like are a kind of “futurology” – speculation, predictive modeling, planning for the future. “Futurology” and its version of “futurism” emerged in the mid-twentieth century as an attempt to control (and transform) the Cold War world through new kinds of knowledge production and social engineering, new technologies of knowledge production and social engineering to be more precise. (This futurism is different than the Marinetti version, the fascist version. Different-ish.) As Jenny Andersson writes in her history of post-war “future studies,” The Future of the World, these “predictive techniques rarely sought to produce objective representation of a probable future; they tried, rather, to find potential levers with which to influence human action.” These techniques, such as the Delphi method popularized by RAND, are highly technocratic — maybe even “cybernetic”? — and are deeply, deeply intertwined with not just economic forecasting, but with military scenario planning.

[…]

Futurology has always tried to sell a shiny, exciting vision for tomorrow — that is, as I argued above, what it was designed to do. But all this — all this — feels remarkably grim, despite the happy drumbeat. Without a radical adjustment to these plans for energy usage and for knowledge automation, jobs of the future seem likely to entail things much less glamorous (or highly paid) than the invented work that gets touted in headlines (and here again, the call for this “masculine energy” sort of shit invoked the explicitly fascist elements of futurism).

[…]

The jobs of the future will involve cleaning up environmental and political and epistemological disaster. They will involve care, for the human and more-than-human world. Of course, that’s always been the work. That’s always been the consequence, always the fallout — the caretakers of the world already know.

Source: Second Breakfast (paywalled for members)

Image: Markus Spiske

Making and remaking the instruments of our own domination

View of Refik Anadol’s Large Nature Model: Coral at the United Nations Headquarters, New York, September 21, 2024.

In this searing essay, the first of an eventual two-parter, R.H. Lossin takes aim at the absurdity of using generative AI for anything other than propping up the existing, dominant culture. Citing Raymond Williams' Culture and Materialism, Lossin explains that AI is the perfect tool for continually remaking cultural hegemony, for creating a normative ‘vibe’ which prevents reflection on what is really going on underneath the surface.

This is the first time, I think, that I’ve come across e-flux, which “spans numerous strains of critical discourse in art, architecture, film, and theory, and connects many of the most significant art institutions with audiences around the world.” Suffice to say, I’ve subscribed, so there will be more from this outlet featured on Thought Shrapnel over the coming weeks and months.

“Hegemony,” wrote Raymond Williams, “is the continual making and remaking of an effective dominant culture.” The concept of hegemony was used by Williams as a way to rescue culture from a reductive and one-way formulation of base and superstructure, where the base—Fordist manufacturing for example—is the cause of the superstructure or all things “merely cultural.” Rather, hegemony places literature, paintings, films, dance, television, music, and so on at the center of how a dominant culture rules or how a ruling class dominates. This is not to assert that art is propaganda for capitalism (although sometimes it is). Nor is it to revert to theories of “art for art’s sake” and the normative metaphysics of liberal cultural criticism (Art’s social value is its independence from politics. What about “beauty”? etc.). According to Williams’s theory of hegemony, art is one way of enlisting our desire in the “making and remaking” our own domination. But desire is unstable and, as an important part of maintaining a dominant culture, art is also, potentially, a means of its unmaking.

Hegemony, it should be noted, is not non-violent. It is always backed up by force, but it allows power to maintain itself without constant recourse to the police or justice system. Within the boundaries of an imperial power at least, hegemony allows ruling classes to govern with the enthusiastic consent and participation of subjects who assume that, for all of its problems, this social order is worth preserving in some form. Hegemony is most effective when it is experienced as sentiment (this movie is “fun to watch,” that immersive experience is “cool”) and understood as common sense (technology is not the problem, it is just used badly by capitalists).

[…]

As datasets continue to increase quantitatively, their fascist exclusions are concealed by the extent of their extraction, but they are no more universal than the universalism of, say, the European Enlightenment. The repetitive, homogenous output of image generators and their non-relation to distinct inputs, even the uneasy intuition that you’ve seen it somewhere already, demonstrates the extent of this exclusion. In a structure that mimics the extractive devastation required to power these screen dreams, the more data it collects the more thoroughly decimated the informational landscape becomes. Rather than the adage “garbage in, garbage out,” favored by computer scientists and statisticians, AI’s transformation of inputs into visual objects is a matter of “value in, garbage out.” Art collection in, garbage out; literature in, garbage out; apples in, garbage out; human subject in, garbage out; Indigenous lifeways in, garbage out.

We are aware of the capacity of capitalism to co-opt oppositional cultural practices. However, not everything is equally visible to the dominant gaze. Because “the internal structures” of hegemony—such as artistic production and institutional promotion—“have continually to be renewed, recreated, and defended,” writes Williams, “they can be continually challenged and in certain respects modified.” The dominant culture will always overlook certain “sources of actual human practice,” and this leaves us with what Williams calls residual and emergent practices. Practices that have escaped, momentarily, or been forgotten by this oppressive selection process; fugitive practices that offer some extant, counterhegemonic possibilities. This is precisely why the “democratic” tendency of ever-expanding datasets is disturbing rather than comforting. It is also why a defense against the oppressive expansion of generative AI needs to be sought outside of a neural network in actual social relationships.

Source: e-flux

Image: Loey Felipe (taken from the article)

Attribute substitution and human decision-making

Scrabble letters spelling out the word 'SUBSTITUTE' with the letter 'E' replaced by a blank

A few years ago, on one of my much-neglected ‘other’ blogs, I exhorted readers to sit with ambiguity for longer than they normally would. In that post, I focused on innovation projects. But our lack of tolerance for ambiguity is everywhere.

In this article, Adam Mastroianni discusses ‘attribute substitution’. It’s a heuristic, a shorthand way that our brains work so that we can answer easier questions rather than harder ones. Although it can lend us a bias towards action, it’s kind of the opposite of living a reflective life influenced by historical insight and philosophical analysis.

The cool thing about attribute substitution is that it makes all of human decision making possible. If someone asks you whether you would like an all-expenses-paid two-week trip to Bali, you can spend a millisecond imagining yourself sipping a mai tai on a jet ski, and go “Yes please.” Without attribute substitution, you’d have to spend two weeks picturing every moment of the trip in real time (“Hold on, I’ve only made it to the continental breakfast”). That’s why humans are the only animals who get to ride jet skis, with a few notable exceptions.

The uncool thing about attribute substitution is that it’s the main source of human folly and misery. The mind doesn’t warn you that it’s replacing a hard question with an easy one by, say, ringing a little bell; if it did, you’d hear nothing but ding-a-ling from the moment you wake up to the moment you fall back asleep. Instead, the swapping happens subconsciously, and when it goes wrong—which it often does—it leaves no trace and no explanation. It’s like magically pulling a rabbit out of a hat, except 10% of the time, the rabbit is a tarantula instead.

I think a lot of us are walking around with undiagnosed cases of attribute substitution gone awry. We routinely outsource important questions to the brain’s intern, who spends like three seconds Googling, types a few words into ChatGPT (the free version) and then is like, “Here’s that report you wanted.”

[…]

Confusion, like every emotion, is a signal: it’s the ding-a-ling that tells you to think harder because things aren’t adding up. That’s why, as soon as we unlock the ability to feel confused, we also start learning all sorts of tricks for avoiding it in the first place, lest we ding-a-ling ourselves to death. That’s what every heuristic is—a way of short-circuiting our uncertainty, of decreasing the time spent scratching our heads so we can get back to what really matters (putting car keys in our mouths).

I think it’s cool that my mind can do all these tricks, but I’m trying to get comfortable scratching my head a little longer. Being alive is strange and mysterious, and I’d like to spend some time with that fact while I’ve got the chance, to visit the jagged shoreline where the bit that I know meets the infinite that I don’t know, and to be at peace sitting there a while, accompanied by nothing but the ring of my own confusion and the crunch of delicious car keys.

Source: Experimental History

Image: Brett Jordan

Every billionaire really is a policy failure

A closeup of a US hundred dollar bill (Benjamin Franklin side).

I don’t really understand people who look at billionaires as anything other than an aberration of the system. They are not, in any way, people to be looked up to, imitated, or praised.

What probably makes it easier for me is that I see pretty much every form of hierarchical organisation-for-profit as something to be avoided. The CEO who exerts downward pressure on wages, resists unionisation, and enjoys the fruits of other people’s labour, is merely different in terms of scale.

If multi-millionaires exist outside the normal cycle of everyday life, billionaires certainly do. That alone makes them spectacularly unfit to be anywhere near the levers of power, to dictate economic policy, or to make pronouncements that anyone in their right mind should listen to.

It’s a mind-bogglingly large sum of money, so let’s try to make it meaningful in day-to-day terms. If someone gave you $1,000 every single day and you didn’t spend a cent, it would take you three years to save up a million dollars. If you wanted to save a billion, you’d be waiting around 2,740 years… All this shows how the personal wealth of billionaires cannot be made through hard work alone. The accumulation of extreme wealth depends on other systems, such as exploitative labor practices, tax breaks, and loopholes that are beyond the reach of most ordinary people.
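The excerpt’s figures are easy to verify. A quick back-of-the-envelope check (my own arithmetic, not from the article):

```python
# Saving $1,000 every single day, spending nothing:
daily = 1_000

years_to_million = 1_000_000 / daily / 365.25      # ~2.7 years, i.e. roughly three
years_to_billion = 1_000_000_000 / daily / 365.25  # ~2,738 years

print(f"{years_to_million:.1f} years to a million")   # 2.7 years to a million
print(f"{years_to_billion:,.0f} years to a billion")  # 2,738 years to a billion
```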

[…]

The notion that a billionaire has worked hard for every penny of their wealth is simply fanciful. The median U.S. salary is $34,612, but even if you tripled that and saved every penny for a lifetime, you still wouldn’t accumulate anywhere close to a billion dollars. Here, it’s also worth looking at Oxfam’s extensive study on extreme wealth, which found that approximately one-third of global billionaire fortunes were inherited. It’s not about working harder, smarter, or better. There are many factors built into our economic system that help extreme wealth to multiply fast. It’s a matter of being well-placed to benefit from the structures that favor capital and produce a profit off the back of exploitation.

[…]

Jeff Bezos could give every single one of his 876,000 employees a $105,000 bonus and he’d still be as rich as he was at the start of the pandemic.
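This claim, like the earlier tripled-salary one, survives a sanity check. A sketch using the article’s numbers (the 45-year working life is my own assumption):

```python
# Tripled median US salary ($34,612 x 3), saved in full over a 45-year career:
lifetime_savings = 34_612 * 3 * 45  # ~$4.7m, roughly 0.5% of a billion
# The Bezos bonus claim: 876,000 employees x $105,000 each:
bonus_bill = 876_000 * 105_000      # ~$92bn

print(f"${lifetime_savings:,}")  # $4,672,620
print(f"${bonus_bill:,}")        # $91,980,000,000
```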

[…]

It’s true that the billionaire class creates jobs and that wages have the potential to drive the economy, but that argument falters when workers barely have enough to survive. The potential to generate tax dollars from billion-dollar profits is enormous. Oxfam found that if the world’s richest 1% paid just 0.5% more in tax, we could educate all 262 million children who are currently out of school and provide health care to save the lives of 3.3 million. But given generous tax cuts and easily exploitable loopholes like the ability to register wealth in offshore tax havens, this rarely comes to pass.

[…]

Some favor the adoption of universal social security measures, paid for via progressive taxes. It’s been argued that Universal Basic Income, Guaranteed Minimum Income, and Universal Basic Services could aid prosperity in a world grappling with growing populations, societal aging, and climate breakdown. Piecemeal proposals are not enough to remedy a crisis of poverty in the midst of plenty. And a fair world would not further the acceleration of either.

Source: Teen Vogue

Image: Adam Nir

When everything is automated in an information vacuum, conspiracies abound

Man sitting on wall wearing a face mask with his arm resting on an Uber Eats delivery bag

I think it’s important to pay attention to what’s happening in the so-called “gig economy” as it’s effectively what capitalists would do to all of us if they could get away with it. In this case, The Guardian looks at couriers working for apps such as Uber Eats, Just Eat and Deliveroo.

Sure enough, the couriers have no real idea what’s going on in terms of allocation of work. So they turn to workarounds and conspiracy theories. I can’t imagine this being good for anyone’s mental health.

The couriers wonder why someone who has only just logged on gets a gig while others waiting longer are overlooked. Why, when the restaurant is busy and crying out for couriers, does the app say there are none available?

“We can never work out the algorithm,” one of the drivers says, requesting anonymity for fear of losing work. They wonder if the app ignores them if they’ve done a few jobs already that hour, and experiment with standing inside the restaurant, on the pavement or in the car park to see if subtle shifts in geolocation matter.

“It’s an absolute nightmare,” says the driver, adding that they permanently lost access to one of the platforms over a matter of a “max five minutes” wait in getting to a restaurant while he finished another job for a different app. Sometimes he gets logged out for a couple of hours because his beard has grown, confusing the facial recognition software.

“It’s not at all like being an employee,” he says. He is regularly frustrated by having to challenge what appeared to be shortfall in pay per job – sometimes just 10p, but at other times a few pounds. “There’s nobody you can talk to. Everything is automated.”

[…]

“Every worker should understand the basis on which they are paid,” [James] Farrar said [who has a lot of experience with gig economy apps]. “But you’re being gamed into deciding whether to accept a job or not. Will I get a better offer? It’s like gambling and it’s very distressing and stressful for people.

“You are completely in a vacuum about how best to do the job and because people often don’t understand how decisions are being made about their work, it encourages conspiracies.”

Source: The Guardian

Image: Sargis Chilingaryan

Monetising our own attention

Stock price chart

It has been A Week. So I’ve only just caught up with Jay Springett’s weeknote from last week, in which he talks about the $TRUMP memecoin. Money hasn’t had any intrinsic value since the major currencies left the gold standard decades ago. Memecoins are like cryptocurrencies on steroids.

TRUMP Coin has sort of got lost in the noise in UK media due to the Tiktok shut down. But its way way way more insane, and way more significant news. In the last 48 hours Trump’s net worth increased by FIFTY BILLION (ILLIQUID) DOLLARS. Just days before he becomes president.

Jay quotes himself about real time attention markets and ‘economic entertainment’. It’s fascinating, especially if you read books like Clay Shirky’s Here Comes Everybody back in the day:

The rise of real time attention markets, economic entertainment, prediction markets (and the coming era of Power Fandoms) are a kind of revenge of late 90’s early 00’s Utopianism. The idea of cognitive surplus. We’re starting to see the kinds of swarm/group intelligences predicted by Shirky / Tapscott – but distorted through contemporary capitalism’s relentless logic. It took super liquid markets and meme coins for them to emerge.

He and some others have been discussing what all this means, which has led to a post by RM that channels the TV show Black Mirror. Even reading about this kind of stuff makes me feel about a million years old:

Personally, seeing your “value” as a volatile ticker must be truly psychologically draining. Imagine scaling that to a presidency. One day, your market cap soars; the next, an unpopular move collapses the coin. It’s like living in a Black Mirror episode where “market cap” equals self-worth and “24h volume” measures relevance.

[…]

Is this the world’s most ingenious social experiment, rewriting power, brand, and money dynamics? Or an accidental time bomb threatening presidential credibility? Unlike stocks reacting to politics, this directly monetizes an individual’s persona, allowing real-time buying and selling of reputation.

What does all of this mean in practice? I have no idea.

Source: thejaymo

Image: Maxim Hopman

Action stopping short of introducing compulsory national ID cards

Person holding black phone

It sounds like the UK government is preparing to bring in a dedicated app, initially for digital driving licenses — as is happening elsewhere in the world — but eventually for everything from tax payment to benefit claims and reminding people what their National Insurance number is.

This is a fascinating area for me, for a few reasons. First, the technology mentioned (“allowing users to hide their addresses in certain situations”) makes me think this is very likely to be based on the Verifiable Credentials standard. This is the standard that Open Badges, which I’ve been working on now for 14 years, is based on.
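For the curious, a Verifiable Credential is, at heart, a signed JSON document. Here is a minimal sketch of what a digital driving licence credential might look like; beyond the W3C-specified `@context`, `type`, `issuer`, and `credentialSubject` fields, the names and values are illustrative guesses, not the actual GOV.UK schema:

```python
import json

# Hypothetical credential payload. Selective disclosure is the mechanism that
# would let a holder prove their date of birth to a bar without revealing
# their address; the address value below is deliberately withheld.
credential = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential", "DrivingLicenceCredential"],
    "issuer": "did:example:dvla",
    "credentialSubject": {
        "id": "did:example:holder",
        "birthDate": "1999-01-31",
        "address": None,  # withheld under selective disclosure
    },
}

print(json.dumps(credential, indent=2))
```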

Second, there’s a huge resistance in this country to the idea of ID cards. That means initiatives such as this can aim for the kind of utility which ID cards would provide, but have to present themselves in a way that is not ‘ID card-like’. Perhaps an app that focuses on providing immediate value in several areas will help with this.

Third, and finally, I’m delighted that it seems that the GOV.UK team which will be behind this have decided not to go with a solution based on Google/Apple wallets. It would have been a terrible decision to do that, akin to handing over the keys to the digital kingdom to non-state actors.

The virtual wallet is understood to have security measures similar to many banking apps, and only owners of respective licences will be able to access it through inbuilt security features in smartphones, such as biometrics and multi-factor authentication.

The voluntary digital option is to be introduced later this year, according to the Times. Possible features include allowing users to hide their addresses in certain situations, such as in bars or shops, and using virtual licences for age verification at supermarket self-checkouts.

The government is said to be considering integrating other services into the app, such as tax payments, benefits claims and other forms of identification such as national insurance numbers, but will stop short of introducing compulsory national ID cards, which were pushed for by former prime minister Tony Blair and William Hague.

Source: The Guardian

Image: Robin Worrall

At least until we’re dead, education’s purpose to help us survive and thrive, not just get a job

A glass sphere on a log

Next time someone even suggests that education is merely the means of eventually finding ‘employment’, I’m just going to 301 redirect them to this magnificent rant by my extraordinarily talented colleague, Laura Hilliger.

I will be brief because some of my readers are not here for educational philosophy. For decades many in my network have championed actual education, the long-stretch goal of which is essentially self-actualisation. This is a term popularised by Maslow, but even Aristotle was pontificating about our human states of becoming. Education is, briefly, not only acquiring skills but realising our free will, potential and unique unicorn properties so that we can survive the shitshow that is existence. At least until we’re dead, education’s purpose to help us survive and thrive, not just get a job.

In society, education is both contrasted and conflated with other terms like learning, training or skill development. The field is semantically messy, and at the end of the day many don’t care about actual education. For society writ-large, the purpose of education is not self-actualisation, but rather compliance, conformance and control. I’m not talking about educators, you fluffy, beautiful bandits of resistance leaders, I’m talking about the systems around and through which people have access to education. Learning to learn, being intellectually curious, bravely looking the human condition in the face – these are not economically responsible endeavours. Thus, they have traditionally been reserved for the privileged (and the possessed).

Source: Freshly Brewed Thoughts

Image: Look Up Look Down Photography

The time to prepare is now

Repeating image of four skulls with increasing doubling, blurring, ghosting, pixelation, and horizontal glitching.

Matt Webb thinks that countries need to be thinking about building a ‘strategic fact reserve’. It’s an interesting proposition but also… how has it come to this?!

[I]f I were to rank AI (not today’s AI but once it is fully developed and integrated) I’d say it’s probably not as critical infrastructure or capacity as energy, food or an education system.

But it’s probably on par with GPS. Which underpins everything from logistics to automating train announcements to retail.

[…]

I think we’re all assuming that the Internet Archive will remain available as raw feedstock, that Wikipedia will remain as a trusted source of facts to steer it; that there won’t be a shift in copyright law that makes it impossible to mulch books into matrices, and that governments will allow all of this data to cross borders once AI becomes part of national security.

Everything I’ve said is super low likelihood, but the difficulty with training data is that you can’t spend your way out of the problem in the future. The time to prepare is now.

[…]

Probably the best way to start is to take a snapshot of the internet and keep it somewhere really safe. We can sift through it later; the world’s data will never be more available or less contaminated than it is today. Like when GitHub stored all public code in an Arctic vault (02/02/2020): a very-long-term archival facility 250 meters deep in the permafrost of an Arctic mountain. Or the Svalbard Global Seed Vault.

But actually I think this is a job for librarians and archivists.

Source: Interconnected

Image: Kathryn Conrad

A vector for deciding who is disposable

A bird sitting on top of a dirt hill

I grew up under a government led by Margaret Thatcher. Thatcherism meant a rejection of solidarity, the welfare state, and unions, and a belief in neoliberalism, austerity, and British nationalism. It was an absolute breath of fresh air, therefore, when in 1997, as a 16 year old, I witnessed ‘New’ Labour sweeping to victory in the General Election.

What followed was revolutionary, at least in the place I grew up: Sure Start centres, investment in public services, and a real sense of togetherness throughout society. Labour lost power 15 years ago, and the period of Tory rule up to the middle of last year introduced Austerity 2.0, the polarisation of society, and chronic underfunding of the NHS and other essential services.

It’s surprising, therefore, that the first six months of Keir Starmer’s Labour government hasn’t felt like much of a change from the Tory status quo. Perhaps the most obvious example of this is the recent announcement that AI will be ‘mainlined into the veins’ of the UK, using rhetoric one would expect from the right wing of politics. As I saw one person on social media put it, this would have been very different had Starmer and co been seeking the support of the TUC and the Joseph Rowntree Foundation.

I’ve been listening to Helen Beetham’s new podcast in which she interviews Dan McQuillan, author of Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. It’s not one of those episodes where you can be casually doing something else and half-listening, which is why I haven’t finished it yet. It has, however, prompted me to explore Dan’s blog, which is where I came across this post on ‘AI as Algorithmic Thatcherism’, written in late 2023.

It’s extraordinarily disingenuous for the government to say that the move proposed is going to ‘create jobs’, as the explicit goal of ‘efficiency’ is to remove bottlenecks. Those are usually human-shaped. Maybe we should stop speedrunning towards dystopia? We need to prepare for post-capitalism; it’s just a shame that our government is doubling down on hypercapitalism.

One thing that these models definitely do, though, is transfer control to large corporations. The amount of computing power and data required is so incomprehensibly vast that very few companies in the world have the wherewithal to train them. To promote large language models anywhere is privatisation by the back door. The evidence so far suggests that this will be accompanied by extensive job losses, as employers take AI’s shoddy emulation of real tasks as an excuse to trim their workforce. The goal isn’t to “support” teachers and healthcare workers but to plug the gaps with AI instead of with the desperately needed staff and resources.

Real AI isn’t sci-fi but the precaritisation of jobs, the continued privatisation of everything and the erasure of actual social relations. AI is Thatcherism in computational form. Like Thatcher herself, real world AI boosts bureaucratic cruelty towards the most vulnerable. Case after case, from Australia to the Netherlands, has proven that unleashing machine learning in welfare systems amplifies injustice and the punishment of the poor. AI doesn’t provide insights as it’s just a giant statistical guessing game. What it does do is amplify thoughtlessness, a lack of care, and a distancing from actual consequences. The logics of ranking and superiority are buried deep in the make up of artificial intelligence; married to populist politics, it becomes another vector for deciding who is disposable.

[…]

Shouldn’t we be resisting this gigantic, carbon emitting version of automated Thatcherism before it’s allowed to trash our remaining public services? It might be tempting to wait for a Labour victory at the next election; after all, they claim to back workplace protections and the social contract. Unfortunately they aren’t likely to restrain AI; if anything, the opposite. Under the malign influence of true believers like the Tony Blair Institute, whose vision for AI is a kind of global technocratic regime change, Labour is putting its weight behind AI as an engine of regeneration. It looks like stopping the megamachine is going to be down to ordinary workers and communities. Where is Ned Ludd when you need him?

Source: danmcquillan.org

Image: Mike Newbry

The time has come now for many, many people to forge post-capitalist lives, careers, professions, and futures

Traffic cone in long grass

You may have noticed that nostalgia is, well, a vibe at the moment. Why is that? Because the present kinda sucks. Why does it suck? Because we live in completely unequal societies, increasingly ruled by demagogues.

Umair Haque, who was omnipresent on Medium pre-pandemic, now seems to have his own Ghost-powered publication, and has written about post-capitalism. It’s long, with short paragraphs and lots of italicising. But he knows what he’s talking about.

I’ve excerpted the key points, but I’d recommend clicking through and looking at the bullet point list of things he suggests reorientating one’s life and career towards. It was pretty reaffirming for me, with a January of not really enough work on, to know that getting a corporate job isn’t really a long-term solution.

The idea of late capitalism means all that. It means that people are immiserated, exploited, ruined, left desperate. That inequality soars. That there’s no future. That societies lose hope. But instead of coming together and having some kind of constructive revolution, and here we don’t have to agree with Marx, they have a fascist meltdown, which I think we can all agree is a Bad Thing.

People turn on one another. Societies shut down. Companies turn ultra-predatory. Cronyism runs rampant. Economies slide into depression. And instead of some form of positive collective action, the answer to all this tends to be conflict, and maybe even World War.

That’s late capitalism. It’s not just “this is dystopia” or “everything sucks” or even “I’m exploited to the bone.” It has that historical meaning, the very specific one: instead of doing anything positive, making wise decisions, people turn regressive, lose their thinking minds, turn on each other, and instead of the sort of class war Marx envisioned, turn to demagogues who end up starting very real ones instead.

[…]

If you’re middle aged, I’d bet that the above is already beginning to happen to you. You’re being forced out, at least if you’re in a corporate career. Every mistake isn’t just “I could lose the promotion,” it turned into “I could lose this job,” and now it’s, “that’s the end of my career, because I’ll never find another one.”

Understand that and face it. It is true. This trend of forcing middle aged people out—no matter what their accomplishments are—is here to stay now. It is never going away. This is what the “job market” is and will be for the rest of our lives, and probably beyond, because what did we learn earlier? Late capitalism recurs. It isn’t even a “stage,” as Marx’s descendants thought, but something more like a chronic condition. And we, unfortunately, have it.

[…]

The time has come now for many, many people to forge post-capitalist lives, careers, professions, and futures. They might not know it yet. Their despair and bewilderment is a reflection of how little this guiding principle is discussed, understood, or talked about. That doesn’t mean that they all have to go out and be activists or revolutionaries, lol, not at all, we just discussed how being a creator is something that’s post-capitalist.

[…]

What does it mean to “be a post-capitalist"? Many of us are starting to find out. It means running a network, community, organization, thingie, maybe a business, in certain dimensions but not along strictly profit-maximizing capitalist lines, but more humanistic ones, in a sense, and that’s not a bad thing, when you think about it.

Source: the issue.

Image: Kevin Jarrett

One of the most disconnecting forces is our expectations of how others should be

A man sitting at a table talking to a woman

Years ago, I read The Art of Travel by Alain de Botton. It was so long ago that it was the first time I’d been introduced to Seneca’s observation that you can travel, but you can’t escape yourself.

This article by Phillipa Perry — whose books How to Stay Sane and The Book You Wish Your Parents Had Read (and Your Children Will be Glad That You Did) I’d highly recommend — points out that many of our problems stem from (how we conceptualise) our relationships with others.

Often, we believe the solution to our problems lies outside ourselves, believing that if we leave the job, the relationship, everything will be fine. Of course, that can sometimes be true and it’s important to be alert to situations which are truly damaging. But the path towards feeling more connected to others usually starts from within. We must examine how we talk to ourselves, uncover the covert beliefs we live by, and confront the darker aspects of our psyche. One of the most disconnecting forces is our expectations of how others should be – but learning to accept people and things we cannot change can help us become more sanguine.

Source: The Guardian

A certain brand of artistic criticism and commentary has become surprisingly rare

A skeleton, presumably representing Death, lifting his cloak to show some people a rainbow on a screen

Good stuff from Erik Hoel about, effectively, the need for more cultural criticism around the use of technology in society. Any article that appropriately quotes Neil Postman is alright by me, and the art (included here) from Alexander Naughton which accompanies the article? Wow.

[L]ately some decisions have been explicitly boundary-pushing in a shameless “Let’s speedrun to a bad outcome” way. I think most people would share the worry that a world where social media reactivity stems mainly from bots represents a step toward dystopia, a last severing of a social life that has already moved online. So news of these sorts of plans has come across to me about as sympathetically as someone putting on their monocle and practicing their Dr. Evil laugh in public.

Why the change? Why, especially, the brazenness?

Admittedly, any answer to this question will ignore some set of contributing causal factors. Here in the early days of the AI revolution, we suddenly have a bunch of new dimensions along which to move toward a dystopia, which means people are already fiddling with the sliders. That alone accounts for some of it.

But I think a major contributing cause is a more nebulous cultural reason, one outside tech itself, in that a certain brand of artistic criticism and commentary has become surprisingly rare. In the 20th century a mainstay of satire was skewering greedy corporate overreach, a theme that cropped up across different media and genres, from film to fiction. Many older examples are, well, obvious.

Source: The Intrinsic Perspective

The feedback has to be orders of magnitude faster than the situation being controlled

A roller coaster lit up at night with red lights

Tom Watson wrote up a workshop he ran on organisational resilience recently, quoting and linking to one of Roger Swannell’s weeknotes about feedback loops. The full quotation from Swannell, taken from his blog, reads:

One of the insights I found interesting is that for feedback loops to work effectively, the feedback has to be orders of magnitude faster than the situation being controlled. So, if we’re shipping fortnightly, then the feedback would have to be hourly in order for us to have any sense of what effect we’re having. In practice, it’s usually the other way round and feedback is much slower than the situation.
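Swannell’s point can be made concrete with a toy simulation. This is purely illustrative: the drifting process, the “full correction” controller, and the numbers are all invented, but they show how a loop whose feedback is as fast as the situation keeps error near zero, while one that hears back only every fourteen steps accumulates it.

```python
# Toy model of Swannell's insight: a controller that only receives
# feedback every 14 steps (a 'fortnight') tracks a drifting process far
# worse than one that hears back every step. All numbers are invented.

def run(feedback_every: int, steps: int = 140) -> float:
    """Return the mean absolute error of a simple corrective controller."""
    value, target, total_error = 0.0, 0.0, 0.0
    for t in range(steps):
        value += 1.0                 # the situation drifts each step
        if t % feedback_every == 0:  # feedback only arrives sometimes...
            value = target           # ...and triggers a full correction
        total_error += abs(value - target)
    return total_error / steps

fast = run(feedback_every=1)   # feedback as fast as the drift
slow = run(feedback_every=14)  # feedback a 'fortnight' apart
assert fast < slow
```

The fast loop stays perfectly on target in this toy; the slow loop averages several units of error, because for thirteen of every fourteen steps it is flying blind.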

Watson goes on to discuss this in terms of organisational resilience, mapping single-loop (Are we doing things right?), double-loop (Are we doing the right things?), and triple-loop learning (How do we decide what’s right?) onto the “Anticipate, Prepare, Respond, Adapt” approach to organisational resilience.

Interestingly, the three things he suggests to help build organisational resilience (continual monitoring, open working, monthly reflections) are things central to our co-op:

[H]ere’s a couple of ideas to try that can help move our approach to learning forward.

  1. Make sure your monitoring and metrics allow you to answer the question “Are we doing things right?” in a timely manner. Short timeframes are generally better. Align to any decisions you need to make.

  2. Embrace open working - devote 20-30 minutes a week to allow you and your team to reflect on what is going well, what isn’t, what is challenging, what people are seeing.

  3. Put in monthly/quarterly sessions - maybe an hour where you explore the question “Why do we do it this way?” on a specific topic as a team. Use the weeknotes to start the culture of open reflection, use them to identify common topics that might be coming up.

Doing these 3 things will move you from being only in the Response phase, into anticipate and prepare phases. Or if you prefer from single to double loop learning.

Sources: Tomcw.xyz / Roger Swannell

Image: Aleksandr Popov

Who wants to have to speak the language of search engines to find what you need?

Students at computers with screens that include a representation of a retinal scanner with pixelation and binary data overlays and a brightly coloured datawave heatmap at the top.

It’s about a decade since I gave up on Google search. While I use Google services extensively for work and other areas of my life, search and personal email are not two of them. Instead, I use DuckDuckGo and, more recently, Perplexity Pro.

The latter is excellent, bypassing advertising and paid placements, acting as a natural language search agent for synthesising information. I tend to use it for information that would take several searches. Yesterday, for example, I gave it the following query: “I need a tool that can automatically take screenshots of a web page and then stitch them together. It should then make an animated gif, scrolling through the page from top to bottom. The website requires a login, so ideally it should be a Chrome browser extension.” It gave me several options, approaching my request from multiple angles as there wasn’t a solution that did exactly what I needed.
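For what it’s worth, the core of what I was asking for is only a few lines once the capture and login steps are out of the way. Here’s a rough sketch (not any of the tools Perplexity suggested) of the stitching-and-scrolling step, assuming you already have one tall full-page screenshot; the function name, sizes, and timings are all made up for illustration:

```python
# Rough sketch: given one tall screenshot of a full page, cut it into
# viewport-sized frames and save them as a GIF that "scrolls" from top
# to bottom. Capture and login are deliberately out of scope here.
from PIL import Image

def scroll_gif(page: Image.Image, out_path: str,
               viewport_height: int = 400, step: int = 100,
               frame_ms: int = 120) -> int:
    """Save an animated GIF scrolling down `page`; return the frame count."""
    frames = []
    # Slide a viewport-sized window down the tall image, one crop per frame.
    for top in range(0, max(1, page.height - viewport_height + 1), step):
        frames.append(page.crop((0, top, page.width, top + viewport_height)))
    # Pillow writes an animated GIF when save_all=True and extra frames
    # are passed via append_images; loop=0 means repeat forever.
    frames[0].save(out_path, save_all=True, append_images=frames[1:],
                   duration=frame_ms, loop=0)
    return len(frames)
```

A smaller `step` relative to `viewport_height` gives a smoother scroll at the cost of more frames and a bigger file.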

Although this article in MIT Technology Review mentions Perplexity, it weirdly focuses mainly on Google and OpenAI. There’s no mention that you can choose between LLMs in Perplexity (I use Claude 3.5 Haiku) and the two issues it raises are copyright and hallucinations, rather than sustainability and privacy. Claude 3.5 Haiku is one of the lighter weight models when it comes to environmental impact, but it still consumes a lot more energy (and water, to cool the data centres) than a single DuckDuckGo search.

And then, when it comes to privacy, while it’s great that an LLM can personalise results based on what it already knows about you, there’s an amount of trust there that I’m increasingly wary of giving to companies like OpenAI. I cancelled and then resubscribed to ChatGPT last week. I’m not sure how long I can stomach the Sam Altman circus.

Ultimately, agentic search, where you ask a question in natural language and it shows you the sources it used to synthesise the answer, is the future. Perplexity seems pretty fair in this regard, pulling in my colleague Laura’s post as part of a response about the way that technology has shifted power over the last century. For me, this kind of thing is even more of a reason to work in the open.

There’s a critical digital literacies issue here, one that’s hinted at in the last paragraph of the article (included below) and discussed in Helen Beetham’s podcast episode with Dan MacQuillan, author of Resisting AI: an Anti-fascist Approach to Artificial Intelligence. When “the answer” is presented to you, there’s less incentive to do your own work in finding your own interpretation. I think that is definitely a risk. Although, given that the internet is a giant justification machine already, I’m not entirely sure it will necessarily make things worse — just perhaps make people a bit lazier.

The biggest change to the way search engines have delivered information to us since the 1990s is happening right now. No more keyword searching. No more sorting through links to click. Instead, we’re entering an era of conversational search. Which means instead of keywords, you use real questions, expressed in natural language. And instead of links, you’ll increasingly be met with answers, written by generative AI and based on live information from all across the internet, delivered the same way.

More to the point, you can attempt searches that were once pretty much impossible, and get the right answer. You don’t have to be able to articulate what, precisely, you are looking for. You can describe what the bird in your yard looks like, or what the issue seems to be with your refrigerator, or that weird noise your car is making, and get an almost human explanation put together from sources previously siloed across the internet. It’s amazing, and once you start searching that way, it’s addictive.

[…]

Sure, we will always want to use search engines to navigate the web and to discover new and interesting sources of information. But the links out are taking a back seat. The way AI can put together a well-reasoned answer to just about any kind of question, drawing on real-time data from across the web, just offers a better experience. That is especially true compared with what web search has become in recent years. If it’s not exactly broken (data shows more people are searching with Google more often than ever before), it’s at the very least increasingly cluttered and daunting to navigate.

Who wants to have to speak the language of search engines to find what you need? Who wants to navigate links when you can have straight answers? And maybe: Who wants to have to learn when you can just know?

Source: MIT Technology Review

Image: Kathryn Conrad

AI slop as engagement bait

gray bucket on wooden table

A couple of months ago, I wrote a short post trying to define ‘AI slop’. It was the kind of post I write so that I can, myself, link back to something in passing as I write about related issues. It made me smile, therefore, that the (self-proclaimed) “world’s only lovable tech journalist” Mike Elgan included a link to it in a recent Computerworld article.

I’m surprised he didn’t link to the Wikipedia article on the subject, but then the reason I felt that I needed to write my post was that I didn’t feel the definitions there sufficed. I could have edited the article, but Wikipedia doesn’t include original content, and so I would have had to find a better definition instead of writing my own.

The interesting thing now is that I could potentially edit the Wikipedia article and include my definition because it’s been cited in Computerworld. But although I’ve got editing various pages of the world’s largest online encyclopedia on my long-list of things to do, the reality is that I can’t be doing with the politics. Especially at the moment.

According to Meta, the future of human connection is basically humans connecting with AI.

[…]

Meta treats the dystopian “Dead Internet Theory” — the belief that most online content, traffic, and user interactions are generated by AI and bots rather than humans — as a business plan instead of a toxic trend to be opposed.

[…]

All this intentional AI fakery takes place on platforms where the biggest and most harmful quality is arguably bottomless pools of spammy AI slop generated by users without content-creation help from Meta.

The genre uses bad AI-generated, often-bizarre images to elicit a knee-jerk emotional reaction and engagement.

In Facebook posts, these “engagement bait” pictures are accompanied by strange, often nonsensical, and manipulative text elements. The more “successful” posts have religious, military, political, or “general pathos” themes (sad, suffering AI children, for example).

The posts often include weird words. Posters almost always hashtag celebrity names. Many contain information about unrelated topics, like cars. Many such posts ask, “Why don’t pictures like this ever trend?”

These bizarre posts — anchored in bad AI, bad taste, and bad faith — are rife on Facebook.

You can block AI slop profiles. But they just keep coming — believe me, I tried. Blocking, reporting, criticizing, and ignoring have zero impact on the constant appearance of these posts, as far as I can tell.

Source: Computerworld

Image: pepe nero

LinkedIn has become a hellish waiting room giving off Beetlejuice vibes

The word 'LinkedIn' in white letters on a black background

I hate LinkedIn. I hate the performativity, the way it makes me feel unworthy, as if I’m not enough. I hate the way that it promotes a particular mindset and approach to the world which does not mesh with my values. I also hate the fact that you can’t be unduly critical of the platform itself (try it: “something is wrong, try again later” the pop-up message insists).

This post by Amy Santee, which I discovered via a link from Matt Jukes, lists a lot of things wrong with the platform. I’ve quit LinkedIn before, but then felt like I needed to return a decade ago when becoming a consultant. And that’s the reason I stay: with the demise of Twitter, the only reliable way I can get in touch with the remnants of my professional community is through LinkedIn.

It really sucks. I appreciate what Santee suggests in terms of connecting with people via a newsletter, but that feels too broadcast-like for me. I crave community, not self-serving replies on ‘content’.

The mass tech layoffs of 2022-2024 have resulted in an explosion of people looking for work in an awful market where there just aren’t enough jobs in some fields like recruiting, product design, user research, and even engineering and game development (the latter of which are faring better).

As a result, LinkedIn has become a hellish waiting room giving off Beetlejuice vibes, where unfortunate souls are virtually required to spend inordinate amounts of time scavenging for jobs, filling out redundant applications, performing professionalism and feigning excitement in their posts, and bootlicking the companies that laid them off.

[…]

We labor and post and connect and cross our fingers in desperation, getting sucked into the noise, customizing our feeds (this makes me imagine cows at a trough), scrolling and searching and DMing, trying to beat the algorithm (just post a selfie!) or do something to stand out, all with the hopes of obtaining a prized golden ticket to participate in capitalism for a damn paycheck. We feel bad about ourselves and want to give up when someone else somehow gets a job. We may joke about our own demise. We share that we are about to become homeless or that we’re skipping meals. We express our anger at the system, and we’re more aware than ever before of other people’s suffering at the hands of this system.

[…]

The way I experience the algorithm is that it seems to randomly decide whether or not my posts are worth showing other people, or at least it feels that way because I don’t understand how it works. Linkedin definitely isn’t forthright about it. In its current form, the algorithm can be prohibitive for getting my ideas out there, having conversations, sharing my podcast episodes and blog posts, getting people to attend my events, and doing any of the stuff I used to enjoy about this place.

[…]

The execs and shareholders of LinkedIn (acquired by Microsoft in 2016) are the primary beneficiaries in all of this, and they will do anything to keep their monopolistic grip on our time, our lives, and our data (we are the product, too). This is all on purpose. LinkedIn continues to win big from the explosion in user activity, ad revenue, subscriptions, job posting fees, unpaid AI training via “Top Voice” user content, and the gobs of our data we gift them, in exchange for the displeasure of being linked in until hopefully something else comes around.

Source: The Jaw Breaker Weekly-ish

Image: Kim Menikh

Must-reads for sports fans

Composite image of Lionel Messi from an article in The Athletic

One of the things that I spend a lot of my time doing every week is watching football (soccer). Yet I don’t write about it anywhere. Whether it’s watching one of our teenagers in matches every weekend, or professionals playing in stadiums or on TV, there’s a reason it’s called “the beautiful game”.

As a sucker for all things Adidas, I accrue a number of points each year in their ‘Adiclub’ members area which are of a “use it or lose it” nature. Having heard good things about The Athletic, a publication created by two former Strava employees who sold it to The New York Times in 2022, I exchanged some points for a year’s subscription. I have to say I’m already hooked.

The depth is staggering and the use of images fantastic. Here, for example, are articles on the issues around redeveloping Newcastle United’s stadium, the reason that Trent Alexander-Arnold had such a poor game against Manchester United, and a wonderful article (which I sent to my daughter) about the art of ‘scanning’ for midfield players. The use of gifs in the latter is 👌

I realise that this reads like a sponsored post, and I’m not a big fan of the editorial cowardice shown by The New York Times, but this is me just pointing out the good stuff. If you consider yourself a sports fan, I’d highly recommend getting yourself access.

Source: The Athletic

Promising Trouble's advice on UK Online Safety Act compliance

Black and purple computer keyboard

The UK’s Online Safety Act is due to come into effect soon (17th March 2025), and everyone seems to be a bit confused about it. For example, I filled in the online self-assessment tool on behalf of one of our clients, who we helped set up an online forum last year. It looks like they’re going to have to carry out an impact assessment.

Rachel Coldicutt has been doing some work, including reaching out to Ofcom, the communications regulator. The best mental model I’ve got for what she’s found is that it’s a bit like GDPR. Except people are even less aware and organised.

For volunteers and small activist organisations, it just becomes yet another layer of bureaucracy to deal with. Although “small low-risk user-to-user services” are defined as those with “fewer than 7 million users”, I can imagine this will have a negative effect on people thinking about setting up, or continuing to run, online community groups.

Five things you need to run a small, low-risk user-to-user service. This is set out in more detail on pages 2-5 of this document and can be summarised as follows:

  1. an individual accountable for illegal content safety duties and reporting and complaints duties

  2. a content moderation function to review and assess illegal and suspected illegal content, with swift takedown measures

  3. an easy-to-find user complaints system and process, backed up by an appropriate process to deal with complaints and appeals, with the exception of manifestly unfounded claims

  4. easy-to-find, understandable terms and conditions

  5. the ability to remove accounts for proscribed organisations

Source: Promising Trouble

Image: 𝙂𝙧𝙚𝙜𝙤𝙧𝙮 𝙂𝙖𝙡𝙡𝙚𝙜𝙤𝙨

Bridging Dictionary

Screenshot of the Bridging Dictionary

On the one hand, I really like this new ‘Bridging Dictionary’ from MIT’s Center for Constructive Communication. On the other hand, it kind of presupposes that people on each side of the political spectrum argue in good faith and are interested in the other side’s opinion.

To be honest, it feels like the kind of website we used to see a decade ago, when we’d started to see the impact of people getting their news via algorithm-based social media feeds.

The most interesting thing for me, given that I get the majority of my news from centrist and centre-left publications, is seeing which words tend to be used by, for example, Fox News. The equivalent here in the UK, I guess, would be GB News or the Daily Mail.

Welcome to the Bridging Dictionary, a dynamic prototype from MIT’s Center for Constructive Communication that identifies how words common in American political discourse are used differently across the political divide. In addition to contrasting usage by the political left and right, the dictionary identifies some less polarizing–or bridging–alternatives.

Source: BridgingDictionary.org

A feedback loop of nonsense and violence

3D render of a red maze with a blue ball in the middle. The ball can come out of one of two exits: 'True Facts' or 'Fake News'

Unless you’ve been living under a rock for the past few days, you should by now be aware of the news that Meta products, including Facebook and Instagram, will replace teams of content moderators with ‘community notes’.

People on social media seem to think that merely linking to a bad news story and telling their network that “this is bad” is in any way a form of protest or activism. Not using stuff is protest; doing something about Meta’s influence in the world is activism.

Anyway, the best take I’ve seen on this whole thing is, unsurprisingly, from Ryan Broderick, who not only diagnoses what’s happened over the last four years, but predicts what will happen as a result. The only good thing to come of this whole debacle is that there have been some fantastic parody news stories, including this one.

[C]ontent moderation, as we’ve understood it, effectively ended on January 6th, 2021… [T]he way I look at it is that the Insurrection was the first time Americans could truly see the radicalizing effects of algorithmic platforms like Facebook and YouTube that other parts of the world, particularly the Global South, had dealt with for years. A moment of political violence Silicon Valley could no longer ignore or obfuscate the way it had with similar incidents in countries like Myanmar, India, Ethiopia, or Brazil. And once faced with the cold, hard truth of what their platforms had been facilitating, companies like Google and Meta, at least internally, accepted that they would never be able to moderate them at scale. And so they just stopped.

This explains Meta’s pivot to, first, the metaverse, which failed, and, more recently, AI, which hasn’t yet, but will. It explains YouTube’s own doomed embrace of AI and its broader transition into a Netflix competitor, rather than a platform for true user-generated content. Same with Twitter’s willingness to sell to Elon Musk, Google’s enshittification, and, relatedly, Reddit’s recent stagnant googlification. After 2021, the major tech platforms we’ve relied on since the 2010s could no longer pretend that they would ever be able to properly manage the amount of users, the amount of content, the amount of influence they “need” to exist at the size they “need” to exist at to make the amount of money they “need” to exist.

And after sleepwalking through the Biden administration and doing the bare minimum to avoid any fingers pointed their direction about election interference last year, the companies are now fully giving up. Knowing the incoming Trump administration will not only not care, but will even reward them for it.

[…]

[I]t is also safe to assume that the majority of internet users right now — both ones too young to remember a pre-moderated internet and ones too normie to have used it at the time — do not actually understand what that is going to look and feel like. But I can tell you where this is all headed, though much of this is already happening.

Under Zuckerberg’s new “censorship”-free plan, Meta’s social networks will immediately fill up with hatred and harassment. Which will make a fertile ground for terrorism and extremism. Scams and spam will clog comments and direct messages. And illicit content, like non-consensual sexual material, will proliferate in private corners of networks like group messages and private Groups. Algorithms will mindlessly spread this slop, boosted by the loudest, dumbest, most reactionary users on the platform, helping it evolve and metastasize into darker, stickier social movements. And the network will effectively break down. But Meta is betting that the average user won’t care or notice. AI profiles will like their posts, comment on them, and even make content for them. A feedback loop of nonsense and violence. Our worst, unmoderated impulses, shared by algorithm and reaffirmed by AI. Where nothing has to be true and everything is popular. A world where if Meta does inspire conspiracy theories, race riots, or insurrections, no one will actually notice. Or, at the very least, be so divided on what happened that Meta doesn’t get blamed for it again.

Source: Garbage Day

Image: Hartono Creative Studio

The internet may function not so much as a brainwashing engine but as a justification machine

Illustration of the Edison multipolar dynamo

“Do your own research” is the mantra of the conspiracy theorist. It turns out that if you search for evidence of something on the internet, you’ll find it. Want proof that the earth is flat? There’s plenty of nutjob articles, videos, and podcasts for that. As there is for almost anything you can possibly imagine.

This post for The Atlantic by Charlie Warzel and Mike Caulfield focuses on the attack on the US Capitol four years ago, and is based on this larger observation about the internet as a ‘justification machine’. As an historian, it makes me sad that when people refer to the “wider context” of a present-day event, they rarely go back more than a few months — or, at the most, a few years.

For example, I read a fantastic book on the history of Russia over the holiday period which really helped me understand the current invasion of Ukraine. I haven’t seen that mentioned once as part of the news cycle. It’s always on to the next thing, almost always presented through the partisan lens of some flavour of capitalism.

Lately, our independent work has coalesced around a particular shared idea: that misinformation is powerful, not because it changes minds, but because it allows people to maintain their beliefs in light of growing evidence to the contrary. The internet may function not so much as a brainwashing engine but as a justification machine. A rationale is always just a scroll or a click away, and the incentives of the modern attention economy—people are rewarded with engagement and greater influence the more their audience responds to what they’re saying—means that there will always be a rush to provide one. This dynamic plays into a natural tendency that humans have to be evidence foragers, to seek information that supports one’s beliefs or undermines the arguments against them. Finding such information (or large groups of people who eagerly propagate it) has not always been so easy. Evidence foraging might historically have meant digging into a subject, testing arguments, or relying on genuine expertise. That was the foundation on which most of our politics, culture, and arguing was built.

The current internet—a mature ecosystem with widespread access and ease of self-publishing—undoes that.

[…]

Conspiracy theorizing is a deeply ingrained human phenomenon, and January 6 is just one of many crucial moments in American history to get swept up in the paranoid style. But there is a marked difference between this insurrection (where people were presented with mountains of evidence about an event that played out on social media in real time) and, say, the assassination of John F. Kennedy (where the internet did not yet exist and people speculated about the event with relatively little information to go on). Or consider the 9/11 attacks: Some did embrace conspiracy theories similar to those that animated false-flag narratives of January 6. But the adoption of these conspiracy theories was aided not by the hyperspeed of social media but by the slower distribution of early online streaming sites, message boards, email, and torrenting; there were no centralized feeds for people to create and pull narratives from.

The justification machine, in other words, didn’t create this instinct, but it has made the process of erasing cognitive dissonance far more efficient. Our current, fractured media ecosystem works far faster and with less friction than past iterations, providing on-demand evidence for consumers that is more tailored than even the most frenzied cable news broadcasts can offer.

[…]

The justification machine thrives on the breakneck pace of our information environment; the machine is powered by the constant arrival of more news, more evidence. There’s no need to reorganize, reassess. The result is a stuckness, a feeling of being trapped in an eternal present tense.

Source: The Atlantic

Image: British Library

You will always be boring if you can't make your own choices

A hand reaching towards floating abstract shapes and spheres in various colours.

I like this post by Adam Singer as it builds on my last post about increasing one’s serendipity surface, as well as an article I published over a decade ago entitled ‘curate or be curated’. The latter covered some of the same ground as Singer’s post, riffing on the idea of the ‘filter bubble’.

Algorithms are literally everywhere in our lives these days, and coupled with AI we are likely to live templated lives. I’m currently composing this post while listening to music coming out of a speaker driven by the iPod I built a couple of years ago. I’m reading a book that I found in a second-hand bookstore. I hesitate to use the word ‘resistance’ but these are small ways in which I ensure that my world isn’t dictated by someone else’s choices for me.

We’ve never had more freedom, more choices. But in reality, most people are subtly funneled into the same streams, the same pools of ‘socially approved’ culture, cuisine and ideas. Remixes and memes abound, but almost no one shares anything weird, original or different. People wake up, perhaps with ambitions to make unique choices they believe are their own, only to find that the options have been filtered, curated, and ‘tailored to existing tastes’ by algorithms that claim to know them best. This only happens as these algorithms prioritize popularity or even just safe choices over individuality. They don’t lead you down our own path or really care what’s interesting and unknown, they lead us down paths proven profitable, efficient, safe. If you work in a creative sector (and many of us do) you already know how dangerous this is professionally, not to mention spiritually.

Algorithms might make for comfortable consumers, but they cannot produce thoughtful creators, and they are slowly taking your ability to choose from you. You might think you’re choosing, but you never really are. When your ideas, interests, and even daily meals are largely inspired by whatever was already approved, already done, already voted on and liked, you’re only experiencing life as an echo of the masses (or the machines, if personalized based on historic preference). And in this echo chamber, genuine discovery is rare, even radical.

Of course, it’s very easy to live like this, as we live in a society totally biased to pain avoidance and ease (it’s so ingrained much of the medical establishment only treats symptoms, not causes). There’s an unconscious allure in this conformity, a feeling of belonging, of social safety, it’s a warm blanket you aren’t alone in the cosmos. But at what cost? In blending into the mainstream wasteland, you risk losing something deeply human: your impulse to explore, the courage to confront the unfamiliar, the potential to define yourself on your own terms. You don’t get real creativity without courage, and no one has this until they stop looking to the crowd for consensus approval.

Source: Hot Takes

Image: Google Deepmind

Luck = (Passionate) Doing x (Effective) Telling

Diagram illustrating the 'Surface Area of Luck' with a formula and a DOING vs. TELLING graph.

Back in 2016 I coined the term ‘serendipity surface’ which I defined as the inverse of an ‘attack surface’ when building software. In other words, you want to maximise your serendipity surface so that good and unexpected things happen to you. It’s something I discussed on the Artificiality podcast last year, if you want to hear more.

Tim Klapdor talks of a ‘serendipity engine’ and I guess Thought Shrapnel could be considered that for me. As part of my reading for this eclectic blog and newsletter, I came across this post on the Model Thinkers website on ‘The Surface Area of Luck’ which has no date, but was indexed by The Internet Archive for the first time in 2021.

There’s some good, actionable advice in it, as well as links for further exploration. It also includes the above image and, as we know, all good ideas require an image :)

Luck, by definition, is about chance, but it’s not totally out of your control. So why not use this model to increase your chance of luck?

The Surface Area of Luck, or your chance of being lucky, is equivalent to the action you take towards your passion, multiplied by the number of people you effectively communicate your passion and activities to.

Put simply: Luck = (Passionate) Doing x (Effective) Telling.
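Because the model is a product rather than a sum, its behaviour is worth spelling out: zero on either factor zeroes out the whole thing, and balanced effort beats piling everything into one side. A minimal Python sketch (the function name and effort scores are my own illustration, not from the source):

```python
def surface_area_of_luck(doing: float, telling: float) -> float:
    """Toy model: luck scales with the product of passionate action
    ('doing') and effective communication of it ('telling').
    Inputs are arbitrary effort scores, e.g. hours per week."""
    return doing * telling

# Multiplicative, so balance wins: 5 x 5 = 25 beats 9 x 1 = 9,
# and brilliant work you tell no one about yields zero.
print(surface_area_of_luck(5, 5))   # → 25
print(surface_area_of_luck(9, 1))   # → 9
print(surface_area_of_luck(10, 0))  # → 0
```

That last case is the model’s sharpest point: doing without any telling produces no surface area for luck at all.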

Source: Model Thinkers

We need to do a lot better than outsourcing AI education to grifters with bombastic Twitter threads

Whiteboard with handwritten text: 'ZIF_LLM_MODE' and 'get-request_body().'

This is a fantastic long post from Simon Willison about things we learned about Large Language Models (LLMs) in 2024. The bit that jumped out to me was, unsurprisingly, the AI literacies angle to all this. As Willison points out, using an LLM such as ChatGPT, Claude, or Gemini seems straightforward as it’s a chat-based interface, but even with the voice modes there’s still a need to understand what’s going on under the hood.

People often want ‘training’ on new technologies, but it’s actually quite difficult to provide in this situation. While I think there are underlying literacies involved here, a key way of understanding what’s going on is to experiment. As with every other technology, there’s no substitute for messing about with stuff to see how it works — and where the limits are.

I’d recommend also having a look at Willison’s list of ‘artifacts’ he created using Claude in a single week. It’s also worth considering the analogy he makes with building out railway infrastructure in the 19th century, as it kind of works.

A drum I’ve been banging for a while is that LLMs are power-user tools—they’re chainsaws disguised as kitchen knives. They look deceptively simple to use—how hard can it be to type messages to a chatbot?—but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.

If anything, this problem got worse in 2024.

We’ve built computer systems you can talk to in human language, that will answer your questions and usually get them right! … depending on the question, and how you ask it, and whether it’s accurately reflected in the undocumented and secret training set.

[…]

What are we doing about this? Not much. Most users are thrown in at the deep end. The default LLM chat UI is like taking brand new computer users, dropping them into a Linux terminal and expecting them to figure it all out.

Meanwhile, it’s increasingly common for end users to develop wildly inaccurate mental models of how these things work and what they are capable of. I’ve seen so many examples of people trying to win an argument with a screenshot from ChatGPT—an inherently ludicrous proposition, given the inherent unreliability of these models crossed with the fact that you can get them to say anything if you prompt them right.

There’s a flipside to this too: a lot of better informed people have sworn off LLMs entirely because they can’t see how anyone could benefit from a tool with so many flaws. The key skill in getting the most out of LLMs is learning to work with tech that is both inherently unreliable and incredibly powerful at the same time. This is a decidedly non-obvious skill to acquire!

There is so much space for helpful education content here, but we need to do a lot better than outsourcing it all to AI grifters with bombastic Twitter threads.

Source: Simon Willison’s Weblog

Image: Bernd Dittrich

It's OK not to have an opinion on everything

Three overlapping phone-shaped pieces of glass in white, black, and translucent gray on a brown background.

My three-week breaks each year, usually in Spring, Summer, and Winter, are rejuvenating. One of the things I most enjoy about them is that I give myself permission to come off social media for a bit. While I’m not a user of TikTok, Instagram, Snapchat, or the like, even Mastodon or Bluesky can be an easy thing to reach for instead of doing something more interesting or useful.

There is no narrative to a social media feed. It’s just one thing after another, ordered either chronologically or algorithmically. Neither is great for trying to build a coherent picture of the world, especially given how emotionally-charged social media posts can be. As a former Nuzzel user, I’ve found Sill useful for avoiding FOMO: it creates a digest of the most popular links that your network is sharing.

What I’ve found myself leaning more into recently is experts making sense of the world as it happens. Two good examples of this are The Rest is Politics and The Athletic which make sense of the world of politics and football (soccer), respectively. Whether or not I agree with what the podcast host or article writer is saying, engaging with longer-form content provides much better context and helps me figure out what I think about a given situation.

Sometimes, of course, it’s OK not to have an opinion on something. This is not always understood or valued on social networks.

A 2023 study… [showed] how internet addiction causes structural changes in the brain that influence behavior and cognitive abilities. Michoel Moshel, a researcher at Macquarie University and co-author of the study, explains that compulsive content consumption — popularly known as doomscrolling — “takes advantage of our brain’s natural tendency to seek out new things, especially when it comes to potentially harmful or alarming information, a trait that once helped us survive.”

[…]

The problem, says the researcher, is that social media users are constantly exposed to rapidly changing and variable stimuli — such as Instagram notifications, WhatsApp messages, or news alerts — that have addictive potential. This means users are constantly switching their focus, which undermines their ability to concentrate effectively.

[…]

In December, psychologist Carlos Losada offered advice to EL PAÍS on how to avoid falling into the trap of doomscrolling — or, in other words, being consumed by the endless cycle of junk content amplified by algorithms. His recommendations included recognizing the problem, making a conscious effort to disconnect, and engaging in activities that require physical presence, such as meeting friends or playing sports.

Source: EL PAÍS

Image: Kelly Sikkema

The privileging of immediate, emotionally-charged, image-driven communication

Silhouette of a person holding a smartphone with the YouTube logo in front of their face.

Recently, when I met up with someone who was launching a new council website, he casually mentioned that his team had optimised it for a reading age of nine. This, apparently, is the average reading age of the UK adult population. A few years ago, my brother-in-law, who works for a church, showed me the way that they had started providing church updates in video format. YouTube and TikTok are by far the most-used apps by (western) teenagers.

Are we heading towards a post-literate society? This article by Sarah O’Connor quotes Neil Postman but I think it would be more appropriate to cite Walter Ong on secondary orality, a kind of orality that depends on literate culture and the existence of writing. For example, the updates provided by my brother-in-law’s church depend on there being a script, written updates to share with the congregation, and a programme of events to which they can refer.

Technological shifts reshape how we perceive and process information, and — as I mentioned in a recent post — we live in a world which privileges immediate, emotionally-charged, image-driven communication over slower, deliberate reflection. It’s a difficult thing to resist or change because, like fast food, it appeals to something innate.

(In passing, I would point out that the improved literacy proficiency of 16-24 year olds in England is probably due to the introduction of a phonics-based approach in early years, and to ensuring young people remain in education or training up to the age of 18.)

The implications for politics and the quality of public debate are already evident. These, too, were foreseen. In 2007, writer Caleb Crain wrote an article called Twilight of the Books in the New Yorker magazine about what a possible post-literate culture might look like. In oral cultures, he wrote, cliche and stereotype are valued, conflict and name-calling are prized because they are memorable, and speakers tend not to correct themselves because “it is only in a literate culture that the past’s inconsistencies have to be accounted for”. Does that sound familiar?

[…]

These trends are not unavoidable or irreversible. Finland demonstrates the potential for high-quality education and strong social norms to sustain a highly literate population, even in a world where TikTok exists. England shows the difference that improved schooling can make: there, the literacy proficiency of 16-24 year olds was significantly better than a decade ago.

The question of whether AI could alleviate or exacerbate the problem is more tricky. Systems like ChatGPT can perform well on many reading and writing tasks: they can parse reams of information and reduce it to summaries.

[…]

But, as [David] Autor [an economics professor at MIT] says, in order to make good use of a tool to “level up” your skills, you need a decent foundation to begin with. Absent that, [Andreas] Schleicher [director for education and skills at the OECD] worries that people with poor literacy skills will become “naive consumers of prefabricated content”.

Source: The Financial Times

Image: Rachit Tank

Resisting the Now Show

Person lying on a sofa reading a book in a dimly lit room.

Just before Christmas, I headed up to Barter Books with my family. It’s a great place where you can exchange books you no longer need for credit, which you can then spend on books that other people have brought in. I picked up Russia: A 1,000-Year Chronicle of the Wild East, a big, thick history book by former BBC correspondent Martin Sixsmith.

I finished it this morning; it was a fantastic read. Sixsmith serialised the book on BBC Radio 4 so it’s easier to follow than the usual history book, but still has plenty of Russian names and places for the reader to wrap their head around.

As Audrey Watters notes, reading can often be hard work. It’s tempting to want to read the summary, to optimise your information environment such that you can get on with the important stuff. Where “the important stuff” is, presumably, making money, arguing on the internet, or attempting to turn a lack of empathy for others into a virtue.

Appropriately enough, it’s difficult to adequately summarise Audrey’s argument in this post because it’s nuanced — as the best writing usually is. As she points out, an important part of reading widely is developing empathy. For example, while I still hold a very low opinion of Vladimir Putin, his actions make a lot more sense when put in the context of a 1,000-year narrative arc. It would have been difficult to come to that realisation watching a short YouTube video or social media thread.

Reading can be slow. It can be quite challenging work – and not simply because our attention has been increasingly conditioned, fragmented with distractions and disruptions. And yet from the considered effort of reading comes consideration. So it isn’t simply that we no longer read at length or read deeply; we no longer value contemplation.

[…]

If, as some scholars argue, learning to read does not just build cognition but helps develop empathy – that is, young readers become immersed in stories outside their own experience and thus see the world differently – what are the implications when adults cannot bother to tell stories to their children?

Source: Second Breakfast

Image: Matias North

Hamming questions

Pebbles with a stone featuring a question mark.

In his most recent newsletter, Ben James shared some “important snippets” from things that he read over the holidays. It included a post from 2019 on ‘The Hamming Question’ which I really like, and focuses the mind somewhat. I perhaps need to think about what the Hamming questions are in the areas in which I work.

Mathematician Richard Hamming used to ask scientists in other fields “What are the most important problems in your field?” partly so he could troll them by asking “Why aren’t you working on them?” and partly because getting asked this question is really useful for focusing people’s attention on what matters.

Source: LessWrong

Best of Thought Shrapnel 2024

Gold 3D render of the number '2024'.

Well, here we are at the end of another year! My sole criterion for inclusion in this ‘best of’ list is that the articles I reference made me think. Reinforcing my existing views, or being merely ‘interesting’ wasn’t enough to make it. So, after whittling down from twenty or so, here are my top ten Thought Shrapnel posts of 2024:

  1. De-bogging yourself — Adam Mastroianni’s topic is getting yourself out of a situation where you’re stuck, which he calls “de-bogging yourself”. I love the way he breaks it down into three different kinds of ‘bog phenomena’ and gives names to examples which fall into those categories.
  2. The importance of context — I can highly recommend this conversation between Adam Grant and Trevor Noah. The conversation they have about context towards the start is so important that I wish everyone I know would listen to it.
  3. Begetting Strangers — This is such a great article by Joshua Rothman in The New Yorker. Quoting philosophers, he concisely summarises the difficulty of parenting, examines some of the tensions, and settles on a position with which I’d agree.
  4. Man or bear IRL — This article by Laura Killingbeck is definitely worth reading in its entirety. Not only is it extremely well-written, it gives a real-world example to a hypothetical internet discussion. Killingbeck is a long-term ‘bikepacker’ and therefore the “man or bear” question is one she grapples with on a regular basis.
  5. Philosophy and folklore — I love this piece in Aeon from Abigail Tulenko, who argues that folklore and philosophy share a common purpose in challenging us to think deeply about life’s big questions. Her essay is essentially a critique of academic philosophy’s exclusivity and she calls for a broader, more inclusive approach that embraces… folklore.
  6. ‘Meta-work’ is how we get past all the one-size-fits-none approaches — Alexandra Samuel points out in this newsletter that a lot of the work we do as knowledge workers will increasingly be ‘meta-work’. Introducing a 7-step approach, she first of all outlines why it’s necessary, especially in a ‘neurovarious’ world.
  7. We become what we behold — An insightful and nuanced post from Stephen Downes, who reflects on various experiences, from changing RSS reader through to the way he takes photographs. What he calls ‘AI drift’ is our tendency to replace manual processes with automated ones.
  8. You don’t have to like what other people like, or do what other people do — Warren Ellis responds to a post by Jay Springett on ‘surface flatness’ by reframing the problem as… not one we have to worry about. It’s good advice: so long as you can sustain an income by not having to interact with online walled gardens, why care what other people do?
  9. 3 strategies to counter the unseen costs of boundary work within organisations — This article focuses on research that reveals people who do ‘boundary work’ within organisations, that is to say, individuals who span different silos, are more likely to suffer burnout and exhibit negative social behaviours.
  10. Dark data is a climate concern — I mean, yes, of course I knew that data files are stored on servers and that those servers consume electricity. But this is a good example of reframing. How many emails have I got stored that I will never look at again? How many files stored in the cloud ‘just in case’?

Thanks for reading and sharing Thought Shrapnel this year! I’ll be back in 2025 🎉

I'm increasingly uneasy about being a Spotify Premium subscriber

Union of Musicians and Allied Workers protesting at Spotify's corporate headquarters in San Francisco. A person in black jacket and face covering holds a sign demanding a penny per stream.

In 2009, seeing which way the wind was blowing, I decided to sell my CD collection and use the proceeds to fund streaming my music via Spotify. Fifteen years later, factoring in price rises and an upgrade to the family version, I’ve probably spent about £2,000. So I reckon I’m about even.

I’ve really enjoyed using Spotify. I like the way it’s available everywhere, including on my Google Home devices and in my car. It’s learned my tastes and I’ve discovered all kinds of music through the service.

However, I’ve felt increasingly guilty about the way that Spotify, and other music streaming services, treat artists. We’re now in a situation where artists have to tour to make a living. I’m not sure that’s necessarily healthy.

Also, given Sabrina Carpenter seems to show up on every playlist I ask Spotify to create at the moment (including ‘hardcore gym rap’!) I’m pretty sure they are also making a lot of money from paid placements. My unease is only compounded with the revelations in this article which details the ways that Spotify have actively tried to reduce the amount of royalties paid to artists.

Perhaps it’s time to move on. Perhaps the answer is to go back to MP3s and use a platform such as Bandcamp? 🤔

According to a source close to the company, Spotify’s own internal research showed that many users were not coming to the platform to listen to specific artists or albums; they just needed something to serve as a soundtrack for their days, like a study playlist or maybe a dinner soundtrack. In the lean-back listening environment that streaming had helped champion, listeners often weren’t even aware of what song or artist they were hearing. As a result, the thinking seemed to be: Why pay full-price royalties if users were only half listening? It was likely from this reasoning that the Perfect Fit Content program was created.

After at least a year of piloting, PFC was presented to Spotify editors in 2017 as one of the company’s new bets to achieve profitability. According to a former employee, just a few months later, a new column appeared on the dashboard editors used to monitor internal playlists. The dashboard was where editors could view various stats: plays, likes, skip rates, saves. And now, right at the top of the page, editors could see how successfully each playlist embraced “music commissioned to fit a certain playlist/mood with improved margins,” as PFC was described internally.

[…]

Some employees felt that those responsible for pushing the PFC strategy did not understand the musical traditions that were being affected by it. These higher-ups were well versed in the business of major-label hitmaking, but not necessarily in the cultures or histories of genres like jazz, classical, ambient, and lo-fi hip-hop—music that tended to do well on playlists for relaxing, sleeping, or focusing. One of my sources told me that the attitude was “if the metrics went up, then let’s just keep replacing more and more, because if the user doesn’t notice, then it’s fine.”

[…]

In a Slack channel dedicated to discussing the ethics of streaming, Spotify’s own employees debated the fairness of the PFC program. “I wonder how much these plays ‘steal’ from actual ’normal’ artists,” one employee asked. And yet as far as the public was concerned, the company had gone to great lengths to keep the initiative under wraps. Perhaps Spotify understood the stakes—that when it removed real classical, jazz, and ambient artists from popular playlists and replaced them with low-budget stock muzak, it was steamrolling real music cultures, actual traditions within which artists were trying to make a living. Or perhaps the company was aware that this project to cheapen music contradicted so many of the ideals upon which its brand had been built. Spotify had long marketed itself as the ultimate platform for discovery—and who was going to get excited about “discovering” a bunch of stock music? Artists had been sold the idea that streaming was the ultimate meritocracy—that the best would rise to the top because users voted by listening. But the PFC program undermined all this. PFC was not the only way in which Spotify deliberately and covertly manipulated programming to favor content that improved its margins, but it was the most immediately galling. Nor was the problem simply a matter of “authenticity” in music. It was a matter of survival for actual artists, of musicians having the ability to earn a living on one of the largest platforms for music. PFC was irrefutable proof that Spotify rigged its system against musicians who knew their worth.

Source: Harper’s Magazine

People aren't unemployed because they're lazy

About a quarter of the British working-age population (ages 16-64) does not have a job. There are many reasons for this, but the right-wing view is that “benefits are too generous.” I think we can put that to bed with this chart from the University of Bath (2019):

Chart showing UK in last place in terms of generosity around unemployment insurance amongst OECD countries

Reducing benefits that are already some of the lowest in the developed world isn’t likely to get people working again, it just causes misery and has knock-on effects such as an increase in the amount of shoplifting for food and other essential items.

Not only are British unemployment benefits low, but they’re also split in a way which is massively skewed towards housing benefit, as even commentators in the right-wing Sunday Times have to admit:

Chart comparing different countries' percentage 'replacement rate' of unemployment benefits relative to previous salary

Unsurprisingly, state-level economics is fiendishly difficult and nothing at all like running household finances. Here’s a very simple system diagram from an article in the journal Social Policy & Administration from earlier this year which discusses 24 European countries and macroeconomic variables:

Simple system diagram linking economic and employment policies to job insecurity and job quality.

There are two things that it seems the British political class don’t want to talk about. The first is Brexit, an act of almost unimaginable economic harm that has meant 15% lower trade with the EU, and cost the economy over £140 billion so far. The second is the long-term health impact of the pandemic, with the related effects on the number of people working.

All in all, we need a grown-up conversation about this, based on data. But with Reform UK waiting in the wings, potentially financed by the world’s richest person, the chances are we’ll continue with knee-jerk reactions and shallow thinking for the foreseeable future.

Substack bros

Mug on desk with writing on it which reads: 'Everyone is entitled to my opinion'

Having a moral compass can sometimes make life more difficult. I literally turned down a ridiculously well-paid gig last month because it contravened my ethical code. While that particular example was relatively clear cut, it’s more difficult when it comes to things like platforms which are used for free. At what point does your use of it become out of alignment with your values?

Twitter turning to X is a good example of this, with some people leaving a long time ago (🙋) while others, for some inexplicable reason, are still on there. I’d argue that the next service to be recognised as toxic is probably going to be Substack. I hosted Thought Shrapnel there briefly for a few weeks at the end of last year, but left when they started platforming Nazis. They seem to be at it again (here’s an archive version as that link was down at the time of writing).

While I wanted to give that context, this post is actually about a particular style of writing that is popular on Substack. I discovered this via Robin Sloan’s newsletter, which (thankfully) is written in a style at odds with the advice given by Max Read, a relatively successful Substacker. What Read says about being a “textual YouTuber” is spot-on. I can’t imagine anything more awful than watching video after video, but I will read and read until the proverbial cows come home.

The other thing which I think Read gets right is something I was discussing the other day (IRL I’m afraid, no link!) about how everyone wants Strong Opinions™ these days and to be the “main character.” My own writing these days is almost the opposite of that: slightly philosophical, with provisional opinions and, while introspective, not presenting myself as the hero of the story.

My standard joke about my job is that I am less a “writer” than I am a “textual YouTuber for Gen Xers and Elder Millennials who hate watching videos.” What I mean by this is that while what I do resembles journalistic writing in the specific, the actual job is in most ways closer to that of a YouTuber or a streamer or even a hang-out-type podcaster than it is to that of most types of working journalist. (The one exception being: Weekly op-ed columnist.) What most successful Substacks offer to subscribers is less a series of discrete and self-supporting pieces of writing–or, for that matter, a specific and tightly delimited subject or concept–and more a particular attitude or perspective, a set of passions and interests, and even an ongoing process of “thinking through,” to which subscribers are invited. This means you have to be pretty comfortable having a strong voice, offering relatively strong opinions, and just generally “being the main character” in your writing. And, indeed, all these qualities are more important than any kind of particular technical writing skill: Many of the world’s best (formal) writers are not comfortable with any of those things, while many of the world’s worst writers are extremely comfortable with them.

So, part of your job as a Substacker is “producing words” and part of your job is “cultivating a persona for which people might have some kind of inexplicable affection or even respect.”

Source: Read Max

Image: Steve Johnson

Navigating the clash of identity and ability

Distorted ('glitched') photo of a man

I had a great walk and talk with my good friend Bryan Mathers yesterday. He made the trip up from London to Northumberland, where I live, and we went walking in the Simonside Hills and at Druridge Bay.

One of our many topics of conversation was the various seasons of life, including our kids leaving home, doing meaningful work, and social interaction.

Our generation is perhaps the first where men getting help through therapy is at least semi-normal, where it’s OK to talk about feelings, and where there’s the beginnings of an understanding that perhaps work shouldn’t define a man’s life.

What’s interesting about this article in The Guardian by Adrienne Matei is the framing as a “clash of identity and ability.” I’m already experiencing this on a physical level with my mind thinking I’m capable of running, swimming, and jumping much further than I’m able. It’s frustrating, but as the article points out, a nudge that I need to be thinking about my life differently as I approach 44 years old.

In 2023, researchers from the University of Michigan and the University of Alabama at Birmingham published a study exploring how hegemonic masculinity affects men’s approach to health and ageing. “Masculine identity upholds beliefs about masculine enactment,” the authors write, referring to the traits some men feel they must exhibit, including control, responsibility, strength and competitiveness. As men age, they are likely to feel pressure to remain self-reliant and avoid perceived weakness, including seeking medical help or acknowledging emerging challenges.

The study’s authors write that middle-aged men might try to fight ageing with disciplined health and fitness routines. But as they get older and those strategies become less successful, they have to rethink what it means to be “masculine”, or suffer poorer health outcomes. Accepting these identity shifts can be particularly difficult for men, who can exhibit less self-reflection and self-compassion than women.

[…]

[Dr Karen Skerrett, a psychotherapist and researcher] emphasizes there is no tidy, one size fits all way to navigate the clash of identity and ability: “There is just so much diversity that we can’t particularly predict how somebody is going to react to limitations,” she says.

However, in a 2021 research report she and her co-authors proposed six tasks to help people develop a “realistic, accommodating and hopeful” perception of the future: acknowledging and accepting the realities of ageing; normalizing angst about the future; active reminiscence; accommodating physical, cognitive and social changes; searching for new emotionally meaningful goals; and expanding one’s capacity to tolerate ambiguity. These tasks help people to recharacterize ageing as a transition that requires adaptability, growth and foresight, and to resist “premature foreclosure”, or the notion that their life stories have ended.

As we age, managing our own egos becomes a bigger psychological task, says Skerrett. We may not be able to do all the things we once enjoyed, but we can still ask ourselves how we can contribute and support others in meaningful ways. Focusing on internal growth and confronting hard truths with grace and clarity can ease confusion, shame and anger. Instead of clinging to lost identities, we can seek purpose in connection, legacy and gratitude.

Source: The Guardian

Smartphone bans are not the answer

Screenshot for 'Swiped' featuring the presenters and children in uniforms using smartphones

After reading that “every parent should watch” a Channel 4 TV programme called Swiped: The School That Banned Smartphones I dutifully did so this afternoon. I’m off work, so need something to do after wrapping presents 😉

I thought it was poor, if I’m honest. As a former teacher and senior leader, and the father of two teenagers (one of whom has a real issue with screen time), I thought it was OK-ish as a conversation starter. But the blunt instrument of a ‘ban’, as is apparently going to happen in Australia, just seems a bit laughable. How are you supposed to develop digital literacies through non-use?

It’s easy to think that a problem you and other people are experiencing should be solved quickly and easily by someone else. In this case, the government. But this is a systemic issue, and not as easy as the government ‘forcing’ tech platforms to do something about it. What about the chronic underfunding of youth activities and child mental health services, and the slashing of council budgets? Smartphones aren’t the only reason kids sit in their rooms.

In March 2025, the Online Safety Act comes into force. The intention is welcome, but as with the Australian ‘ban’ it’s probably going to be hard to make it work.

The kids in the TV experiment were 12 years old. If, at the end of 2024, you’re letting your not-even-teenager on a smartphone without any safeguards, I’m afraid you’re doing it wrong. If you’re allowing kids of that age to have their phones in their bedroom overnight, you’re doing it wrong. That’s not something you need a ban to fix.

Smartphones, just like any technology, aren’t wholly positive or wholly negative. There are huge benefits and significant drawbacks to them. What’s more powerful in this situation are social norms. If this programme helps to start a conversation, then it’s done its job. I’m just concerned that most people are going to take from it the message that “the government needs to sort this out.”

Source: Channel 4

Universities in the age of AI

The image features a black graduation cap set against a bright blue background. The cap, traditionally square-shaped with a small button at the top center, is positioned to showcase its structure. Attached to the button is a tassel made of multicolored electrical wires, with several loose wire ends visible. The vibrant wires are in various colors, including red, yellow, green, and blue, creating a striking contrast against the solid black fabric and the smooth blue backdrop.

Generative AI tools like ChatGPT, Claude, and Perplexity are now an integral part of my workflow. This is true of almost everything I produce these days, including this post (I used this tool to create the image alt text).

I use genAI in client work, and also in my academic studies. It’s incredibly useful as a kind of ‘thought partner’ and particularly handy in doing a RAG analysis of essays in relation to assignments. Do I use it to fabricate the answers to assessed questions which I then submit as my own work? No, of course not.

This article in The Guardian reports from the frontlines of the struggle in universities for academic rigour and against cheating. Different institutions are approaching the issue differently, as you would expect. The answer, I would suggest, is something akin to Cambridge University’s AI-positive approach, outlined in the quoted text below.

The whole point of Higher Education is to allow students to reflect on themselves and the world. It’s been my experience that using genAI in appropriate ways is an incredibly enriching experience. Especially given that my Systems Thinking modules focus on me as a practitioner in relation to a specific situation in my life, what would it even mean to “cheat”?

I was notified this morning that I received a distinction for my latest module, as I did for the one before it. Would I have achieved those grades without using genAI? Maybe. Probably, even, given I’ve already got a doctorate. But the experience for me as a distance learner was so much better than being limited to interactions with my (excellent) tutor and fellow students in the online forum.

At the end of the day, I’m studying for my own benefit, and I know that studying with genAI is better than studying without it. I’m very much looking forward to using Google’s latest upgrade to Gemini Live for my next module, which I recently found very useful for conversationally preparing for interviews!

More than half of students now use generative AI to help with their assessments, according to a survey by the Higher Education Policy Institute, and about 5% of students admit using it to cheat. In November, Times Higher Education reported that, despite “patchy record keeping”, cases appeared to be soaring at Russell Group universities, some of which had reported a 15-fold increase in cheating. But confusion over how these tools should be used – if at all – has sown suspicion in institutions designed to be built on trust. Some believe that AI stands to revolutionise how people learn for the better, like a 24/7 personal tutor – Professor HAL, if you like. To others, it is an existential threat to the entire system of learning – a “plague upon education” as one op-ed for Inside Higher Ed put it – that stands to demolish the process of academic inquiry.

In the struggle to stuff the genie back in the bottle, universities have become locked in an escalating technological arms race, even turning to AI themselves to try to catch misconduct. Tutors are turning on students, students on each other and hardworking learners are being caught by the flak. It’s left many feeling pessimistic about the future of higher education. But is ChatGPT really the problem universities need to grapple with? Or is it something deeper?

[…]

What counts as cheating is determined, ultimately, by institutions and examiners. Many universities are already adapting their approach to assessment, penning “AI-positive” policies. At Cambridge University, for example, appropriate use of generative AI includes using it for an “overview of new concepts”, “as a collaborative coach”, or “supporting time management”. The university warns against over-reliance on these tools, which could limit a student’s ability to develop critical thinking skills. Some lecturers I spoke to said they felt that this sort of approach was helpful, but others said it was capitulating. One conveyed frustration that her university didn’t seem to be taking academic misconduct seriously any more; she had received a “whispered warning” that she was no longer to refer cases where AI was suspected to the central disciplinary board.

If anything, the AI cheating crisis has exposed how transactional the process of gaining a degree has become. Higher education is increasingly marketised; universities are cash-strapped, chasing customers at the expense of quality learning. Students, meanwhile, are labouring under financial pressures of their own, painfully aware that secure graduate careers are increasingly scarce. Just as the rise of essay mills coincided with the rapid expansion of higher education in the noughties, ChatGPT has struck at a time when a degree feels more devalued than ever.

Source: The Guardian

Sunrise, solar noon and sunset times for 2025 (in Dublin)

Screenshot of website showing sunrise and sunset times, solar noon, etc.

Most people probably have a favourite weather app. Mine is the oddly-named Weawow, for three reasons. First, it looks good; second, it allows you to choose the data source for weather forecasts; third, it shows sunrise, sunset, and ‘golden hour’ times in a really handy way.

I stumbled across a website today from Éibhear Ó hAnluain, a software engineer who lives in Dublin. Where I live is within 2 degrees latitude of there, so the timings are approximately correct for my location too. If someone knows a quick and easy way of generating a similar page for anywhere in the world, let me know!
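For what it’s worth, the underlying calculation is simple enough to script for any latitude and longitude. Below is a rough Python sketch using the standard solar declination and hour-angle approximation; it deliberately ignores the equation of time (worth up to ±16 minutes) and atmospheric refraction, so treat the output as approximate rather than expecting it to match a page like Éibhear’s minute-for-minute.

```python
import math
from datetime import date

def solar_times(lat_deg, lon_deg, day):
    """Approximate sunrise, solar noon and sunset as decimal hours UTC."""
    n = day.timetuple().tm_yday
    # Approximate solar declination for day-of-year n (degrees)
    decl = -23.44 * math.cos(math.radians(360 / 365 * (n + 10)))
    lat = math.radians(lat_deg)
    dec = math.radians(decl)
    # Hour angle at sunrise/sunset; clamp handles polar day/night
    cos_omega = -math.tan(lat) * math.tan(dec)
    cos_omega = max(-1.0, min(1.0, cos_omega))
    omega = math.degrees(math.acos(cos_omega))
    # Solar noon shifts by 1 hour per 15 degrees of longitude
    noon = 12.0 - lon_deg / 15.0
    return noon - omega / 15.0, noon, noon + omega / 15.0

# Dublin is at roughly 53.35°N, 6.26°W
sunrise, noon, sunset = solar_times(53.35, -6.26, date(2025, 6, 21))
print(f"Midsummer daylight in Dublin: about {sunset - sunrise:.1f} hours")
```

Looping that function over every day of the year and writing the results out as an HTML table would get you most of the way to a similar page for anywhere in the world.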

Source: Éibhear/Gibiris

'Social' social networks?

Mozi notification

I notice that Ev Williams, founder of Blogger, Twitter, and Medium, has co-founded a new social app called Mozi. It’s iOS-only for now, and seems to be reinventing some of the functionality of Foursquare check-ins with the private aspect of Path.

Path is the best social network I’ve ever used. I only used it with my family, but as I mentioned when lamenting its demise in 2018, it had the perfect mix of features. As I also hinted at in that post, for-profit private social networks just aren’t sustainable. We never did find anything to replace it, and Signal chats just aren’t the same.

Mozi seems to be based on people making travel plans and then serendipitously bumping into each other. I’d suggest this is already a solved problem for younger generations through Snap Maps, meaning it’s a firmly middle-aged problem. That demographic is probably travelling less anyway. And if they’re British, a good proportion would pay money not to awkwardly bump into people they kind-of know 😅

Williams is a billionaire at this point, so he can do what he likes. But, inevitably, I’ll be pointing back to this post in less than two years when it shuts down. So I won’t be bothering to set up an account, even when it comes to Android.

When you spend your life building internet platforms, it’s hard to quit the habit. So while trying to get a grasp on the people I knew to invite to my birthday, I started thinking: What if we did have a network designed for this purpose? Not just invites, but a map of the people we actually knew and tools for enhancing those relationships?

In other words, what would an actually social network look like?

Clearly, it would need to be private. Non-performative. No public profiles. No public status competitions. No follower counts. No strangers.

Source: Medium

The lifehacked, minimalist life (and its discontents)

Photo illustration of a man in front of a clock and dots in a graph.

I used to be all about the life hacking when I was younger: optimising my time and ensuring maximum productivity was my goal. It made sense for that period of my life, as when I was in my twenties I was teaching full-time, pursuing a doctorate, and starting a family. Time seemed in short supply.

It wasn’t just me, though. There very much seemed to be a movement around this. Yes, it was mainly younger white men, but I admit to not realising that was the case at the time. This article by Laura Miller reflects on that time through the lens of a new book entitled Hacking Life: Systematized Living and Its Discontents. Ultimately, was it just about young men finding ways to do things that their mothers used to do for them? 🤔

The notion of hacking “life” arose during a period when technology was achieving one minor marvel after another, and “disruption” could still be touted as an unalloyed good. Yes, a tech bubble had burst at the beginning of the decade, but that was viewed as a failure of business models, not the tech itself… Smartphones seemed almost magical in their ability to iron the hassles and uncertainty out of everyday activities. You no longer had to give people directions to your house, rustle up a newspaper to find out where the movie you wanted to see was playing, or pick a restaurant with no idea what other diners thought about it.

[…]

As you’ve surely realized by now, it is possible to devote so much time to organizing your work that you never actually do any of it. As Reagle observes, several of the early champions of life hacking, including O’Brien and Mann, signed contracts to write books about how to defeat procrastination and attain Inbox Zero and then never got around to writing them. Most of them dropped out of the scene entirely, abandoning their blogs and denouncing the tech world’s preoccupation with productivity.

Others became proponents of minimalism, an ethos that involves getting rid of almost all of your stuff while becoming even more obsessed with the few things you keep. They sold their houses and moved into RVs. Like Marie Kondo on overdrive, they aimed to fit everything they owned into a single backpack.

[…]

Of course, plenty of people live in RVs and don’t own much because they have no other option; nobody asks them to give TED talks about it. Reagle points out that minimalism has been a phenomenon of young, educated, affluent white men supposedly repudiating a middle-class materialism made possible by their careers in the tech industry and lack of family encumbrances. “Minimalism is for well-off bachelors,” as Reagle puts it, and not especially imaginative ones at that. If you make your fortune at 30 and you’re the sort of person who’s never given much thought to a purpose beyond “success,” what do you do with yourself? A common and strikingly unimaginative answer among minimalists was full-time travel. The possibility that experiences can be accumulated and consumed in just as mindless a fashion as belongings can did not occur to them.

[…]

As one anonymous wag has observed, the vast resources of Silicon Valley have too often been applied to the problem of “what is my mother no longer doing for me?” Don’t get me wrong: I remain in the market for solid, practical tips. But life, like a palm tree or any other organic thing, can only take so much hacking before it collapses.

Source: Slate

Hierarchies should be fluid and temporary

Illustration of a speech bubble reading 'My boss said...' with a no-entry style cross through it.

Last week, I shared a post from this same website, an ‘advent calendar’ of blogging where people reflect on what they’ve thought a lot about over the last year.

This entry is about comfort zones and systems change. The author makes their living in ‘systems change’ and points out that it’s natural for things to evolve and change over time. Not to allow this to happen privileges the few over the many. It’s a good way of thinking about it.

I’ve included an image that Bryan Mathers made for our co-op last year to illustrate my anti-establishment tendencies. I’m almost embarrassed for other people when they use the phrase “My boss said…” as if it’s a normal thing. Any hierarchies should be fluid and temporary.

On the surface, it can seem like people’s resistance to making things better is down to their fear of the unknown, and they lean into the idea of ‘better the devil you know’. However, I’m eight years into this gig, and actually, what I’ve observed is that it’s the complexity that comes with imagining the world anew that people don’t like.

[…]

They find it destabilising when new ways of being emerge because – in order to adopt them – it would mean straying from a well-trodden route. New ideas threaten to force people to create new pathways, adapting to unfamiliar scenarios as they go.

The reality, though, is that this is how life works. We can manufacture fixed systems that seek to impose rigid structures – for example, hierarchies, competition and individualism have all been created. But, at its core, the world shifts and alters and adapts. You only have to look at the natural world to see how life constantly evolves, or the universe to recognise we’re constantly expanding.

By resisting change, we are upholding the manufactured systems that we are forced to live within. The same systems that are rigged against us.

Because those systems are familiar. They are societal norms. They are known.

As long as we are resistant to change, we allow power to be consolidated in the hands of a dominant few who get to shape the media, government and organisations which prescribe how we live our lives.

Source: I thought about that a lot

Image: CC BY-NC Visual Thinkery for WAO

Anxiety as an expensive habit

The word 'Anxiety' drawn on a mobile phone

I’m not sure if this post by Ryan Holiday is just a form of (not-so) subtle marketing for his ‘Anxiety Medallion’ but he nevertheless makes some good points. Framing anxiety as his “most expensive habit,” Holiday talks about what anxiety “steals” from us.

Without wanting to wade too much into the nature vs nurture debate, I think it’s clear that genetics provides some kind of baseline level here. For me, that’s both incredibly frustrating (you can’t choose your ancestors!) but also somewhat liberating. I can’t remember where I learned to do so, but over the last 18 months or so I’ve started saying to myself “it’s all just chemicals in my brain.”

It doesn’t always work, of course, but along with good exercise and sleep routines — and ensuring my stress levels remain low — I manage to cope with it all. The hardest thing to explain to people is that anxiety doesn’t have to have an object. Existential angst, for example, isn’t just something that 19th century philosophers suffered from, but regular people in the here and now.

It’s not flashy, it’s not thrilling, and it doesn’t even provide the fleeting pleasures that other vices might. And yet, anxiety is a vice. A habit. A relentless one that eats away at your time, your relationships, and your moments of joy.

[…]

Seneca tells us we suffer more in imagination than in reality. Anxiety turns the hypothetical into the actual. It drags us into a future that doesn’t yet exist and forces us to live out every worst-case scenario in vivid detail. The cost isn’t just mental. It’s physical. It’s emotional. It’s relational.

[…]

Anxiety is expensive—not just in terms of the mental toll, but in the way it costs us our lives. Every minute spent consumed by worry is a minute lost.

Source: Ryan Holiday

Image: Nik

Yeah, but how?

Graffiti saying THE WORLD IS OURS

I listen to a popular podcast called The Rest is Politics. I remember listening before the US Presidential Election where the hosts could not bring themselves to believe that Trump would successfully win a second term. Why? Because he has “no ground game.” That is to say, he doesn’t have the processes set up to be able to mass-mobilise supporters to knock on doors, get the word out, and encourage people to vote.

Given the results, that’s increasingly looking like 20th-century thinking. I’ve heard anecdotes of canvassers knocking on doors only to find that people already have talking points from following social media influencers and watching YouTube videos. If people have already made up their minds based on things they’ve seen on the small screen they carry around with them everywhere, knocking on their door every few years isn’t going to change anything.

This is why social media is so important. This post argues that we need to be creating new spaces, not just “meeting people where they are.” It’s not an incorrect position to take. I don’t disagree with anything in the post. But how exactly? Mastodon and the Fediverse more generally could have been the ‘ark’ to which people fled after leaving X/Twitter. Instead, they flocked to another “potentially decentralised” social network, with investors and no incentive to do anything other than what everyone else has done before.

I’d like to organise. I’d like to use Open Source software everywhere. I’d like to only buy things from co-ops. However, back in the real world where I need to interact with capitalism to survive…

It’s hard to ignore the fact that progressive movements, despite their critical rhetoric, rely on the same capitalist and surveillance-driven platforms that actively subvert their goals. Platforms like Google, Facebook, The Communication Silo Formerly Named Twitter, and Instagram—behemoths of surveillance capitalism—become the very spaces where activism happens. These corporations profit from our clicks, likes, and shares, capturing our data and feeding it into systems of control that profit from inequality, exploitation, and surveillance.

This ongoing reliance on corporate-owned platforms represents a deep contradiction in our movements. By using these tools, we are feeding the beast—the tech giants profiting from our data, monetizing our activism, and undermining the very causes we fight for. In a real sense, we’ve become complicit in our own subjugation, ceding our autonomy, values, and privacy to the very corporations that reinforce the inequalities we seek to dismantle.

[…]

The phrase “you have to meet people where they live” has been an all-too-convenient defense for this complicity. But this outlook only reinforces the status quo. Shouldn’t a genuinely radical movement—especially a socialist one—work toward building new spaces where people can live, organize, and act outside of these exploitative systems?

Socialist movements throughout history didn’t merely meet people in existing power structures—they created new models of organization, new forms of cooperation, and new spaces for living and working together. From cooperatives to unions, the goal has always been to build alternatives to the capitalist way of life. Why, then, should we treat digital space any differently?

[…]

We cannot keep organizing through the tools of surveillance capitalism if we want to build a post-capitalist future. We must take control of the infrastructure itself—through open-source, community-run platforms. This is not just about technical solutions, but about aligning our methods of organizing with our values and principles.

Source: Seize the Means of Community

Image: Intricate Explorer

Anti-anti-AI sentiment

Chip on circuitboard with letters 'AI' on it

I discovered this article via Laura who referenced it during our co-working session as we updated AILiteracy.fyi. As a fellow Garbage Day subscriber, she’d assumed I’d already seen it mentioned in that newsletter. I hadn’t.

What I like about this piece from Casey Newton is how he points out how disingenuous much of anti-AI sentiment is. There are people doing important, nuanced work pointing out the bullshit (hi, Audrey) but there’s also some really ill-informed, clickbaity stuff that reinforces prejudice.

Of course people will use generative AI to cheat. Of course they will use it to create awful things. But what’s new there? A lot of the hand-wringing I see is from people who have evidently never used an LLM for more than five seconds. They would have been the same people warning about the “dangers” of the internet in the late 90s because “anyone can create a website and put anything online!”

The thing is, while we can’t guarantee that any individual response from a chatbot will be honest or helpful, it’s inarguable that they are much more honest and more helpful today than they were two years ago. It’s also inarguable that hundreds of millions of people are already using them, and that millions are paying to use them.

The truth is that there are no guarantees in tech. Does Google guarantee that its search engine is honest, helpful, and harmless? Does X guarantee that its posts are? Does Facebook guarantee that its network is?

Most people know these systems are flawed, and adjust their expectations and usage accordingly. The “AI is fake and sucks” crowd is hyper-fixated on the things it can’t do — count the number of r’s in strawberry, figure out that the Onion was joking when it told us to eat rocks — and weirdly uninterested in the things it can.

[…]

Ultimately, both the “fake and sucks” and “real and dangerous” crowds agree that AI could go really, really badly. To stop that from happening though, the “fake and sucks” crowd needs to accept that AI is already more capable and more embedded in our systems than they currently admit. And while it’s fine to wish that the scaling laws do break, and give us all more time to adapt to what AI will bring, all of us would do well to spend some time planning for a world where they don’t.

Source: Platformer

The trials and tribulations of working openly

Neon sign saying 'open'

This advent series is published anonymously, but Matt Jukes outed himself as the author of this one. It makes sense for him to do so, as it’s about working in the open and how it’s benefited him, although now he feels like it’s time to “shut up.” For what it’s worth, I hope he doesn’t.

I’m sharing it here, though, as there are plenty of people I know who share as openly as Jukesie, and who might be thinking about different seasons of their careers. I suppose I’m one of them. My wife has never been comfortable with my ‘oversharing’, especially in the early days of Twitter. That’s why I’ve toned down that aspect a bit over the years.

There’s something about oversharing that feels like a focus on the self. But, as I was explaining to my daughter in relation to art just yesterday, you have to find the thing that allows you to represent yourself in the world. For me, it’s writing. For others it’s drawing, painting, or singing. Without that, it’s a sad, unexpressed life.

(It’s also well worth looking at the other essays in the series, as there’s some really good writing here.)

People I’ve never met in person are familiar with my ups and downs at work, my health, my travels and my ambitions. My openness has been called brave, inspiring, narcissistic and irritating. It’s provided me with an army of acquaintances around the world, but probably no more close friends than if I’d never popped my head above the parapet and uttered (or written) a word.

I wear my commitment to working in the open as a badge of honour and have spent years advocating for others to follow suit.

The problem though, and the reason I’ve been thinking a lot about it, is that I am tired of it and really feel like it is time to shut up. I don’t know whether those peak Covid years rewired something in my head, or whether it is just a by-product of getting older, but the energy required to maintain quite so public a persona has become unsustainable, and increasingly less enjoyable. The challenge though, is that my professional identity is so entangled in my openness, I fear what would happen if I did quiet down.

This fear is my own fault. My career has become a patchwork of short-term jobs, generated by a short attention span, and held together by a loose theme and a high profile. If the profile declines, will it all tumble down like a house of cards?

Source: I thought about that a lot

Captive user bases are ripe for enshittified services

Poo emoji

I missed this when Cory Doctorow published it last year, but his strongly-worded and well-reasoned argument still applies. He explains why he’ll only ever be found on actually federated social networks. The word he coined, enshittification, can only be applied to a captive user base. It makes me think about what I’m doing on Bluesky, which I’ve already described as a ‘pound shop Mastodon’.

Look, I’m done. I poured years and endless hours into establishing myself on walled garden services administered with varying degrees of competence and benevolence, only to have those services use my own sunk costs to trap me within their silos even as they siphoned value from my side of the ledger to their own.

[…]

Being a moral actor lies not merely in making the right choice in the moment, but in anticipating the times when you may choose poorly in future, and taking steps to head that off.

[…]

That’s where Ulysses Pacts come in. […] We make little Ulysses Pacts all the time. If you go on a diet and throw away your Oreos, that’s a Ulysses Pact. You’re not betting that you’ll be strong enough to resist their siren song when your body is craving easily available calories; rather, you are being humble enough to recognize your own weakness, and strong enough to take a step to protect yourself from it.

[…]

I have learned my lesson. I have no plans to ever again put effort or energy into establishing myself on an unfederated service. From now on, I will put more weight on how easy it is to leave a service than on what I get from staying. A bad service that you can easily switch away from is incentivized to improve, and if the incentive fails, you can leave.

Source: Pluralistic

Pleias: a family of fully open small AI language models

8 photos of the same tree taken at different times of the year in 2 rows of 4, the last photo is highly pixelated. A pattern of random white blocks run across the image from the left and become aligned on the right.

I haven’t had a chance to use it yet, but this is more like it! Local models that are not only lighter in terms of environmental impact, but are trained on permissively-licensed data.

Training large language models required copyrighted data until it did not. Today we release Pleias 1.0 models, a family of fully open small language models. Pleias 1.0 models include three base models: 350M, 1.2B, and 3B parameters. They feature two specialized models for knowledge retrieval with unprecedented performance for their size on multilingual Retrieval-Augmented Generation, Pleias-Pico (350M parameters) and Pleias-Nano (1.2B parameters).

These represent the first ever models trained exclusively on open data, meaning data that are either non-copyrighted or are published under a permissible license. These are the first fully EU AI Act compliant models. In fact, Pleias sets a new standard for safety and openness.

Our models are:

  • multilingual, offering strong support for multiple European languages
  • safe, showing the lowest results on the toxicity benchmark
  • performant for key tasks, such as knowledge retrieval
  • able to run efficiently on consumer-grade hardware locally (CPU-only, without quantisation)

[…]

We are moving away from the standard format of web archives. Instead, we use our new dataset composed of uncopyrighted and permissibly licensed data, Common Corpus. To create this dataset, we had to develop an extensive range of tools to collect, to generate, and to process pretraining.

Source: Hugging Face

Image: David Man & Tristan Ferne

AI Literacies are plural

This image shows an individual with orange hair interacting with a large, abstract digital mirrored structure. The structure is composed of squares in varying shades of green, orange, white, and black which are pieced together to reflect the individual’s figure. The figure's hand is extended as if pointing to or interacting with the mirrored structure. Behind the  structure are streams of binary code (0s and 1s) in orange, flowing towards the digital grid.

I see a lot of AI Literacy frameworks at the moment. Like this one. From my perspective, most of them make similar mistakes, thinking in terms of defined ‘levels’ using some kind of remix of Bloom’s Taxonomy. There’s also an over-emphasis on cognitive aspects such as ‘understanding’ while more community and civic-minded aspects are often under-emphasised.

So if you think that I’m ego-posting this page, created by Angela Gunder for Opened Culture, then you’d be correct. I met Angela for the first time via video conference a few weeks ago after she sent me an email telling me how she’d been using my work for years. We’ve had a couple more chats since and I’m hoping we’ll get to work together in the coming months.

Recently, Angela has been doing work for UNESCO, as well as on an MIT/Hewlett Foundation funded project. For both, she used my Essential Elements of Digital Literacies as a frame, understanding literacies as plural and contextual. WAO is currently working on an update to ailiteracy.fyi so more on all of this in the new year.

The Dimensions of AI Literacies were developed to address the growing need for educators, learners, and leaders to navigate the complexities of AI in education. Remixed from the work of Doug Belshaw’s Essential Elements of Digital Literacies, this approach recognizes that AI literacies are not a binary of literacy vs. illiteracy, but rather consist of a diverse and interconnected set of competencies. By considering AI literacies as a plurality, this taxonomy enables a deeper understanding of how AI can be leveraged to improve the impact of teaching and learning across various sociocultural contexts. This view helps educators design inclusive and adaptive learning experiences, allows learners to engage with AI tools critically and creatively, and empowers leaders to foster responsible and impactful AI integration across their institutions. Additionally, as AI tools and systems continue to expand in quantity and ability, this taxonomy gives strategists and practitioners a flexible vocabulary to use in navigating the rapidly evolving landscape of AI in education. Through these dimensions, educators and leaders are provided with a foundation for building a collaborative and reflective discourse on AI use, encouraging the development of skills that will shape the future of education in meaningful and impactful ways.

Source: Opened Culture | Dimensions of AI Literacies

Image: Yutong Liu & Kingston School of Art

I hope someday soon I can visit your website

gray concrete bricks painted in blue

When I worked at the Mozilla Foundation a decade ago, there was a programme called Webmaker. There were web apps and Open Source tools that the team created to help people learn how the web works in a practical, hands-on way. My work on web literacy and Open Badges underpinned it (see this white paper, for example), with the aim of avoiding ‘elegant consumption’.

In this post Gita Jackson points out that elegant consumption via centralised, Big Tech-owned social networks has won the day. The way to resist that is to build your own website. Learning some code is great, but these days Mozilla has a service called Solo which makes it super-easy to have your own website. There are easy ways to run your own blog. It takes more effort, but it’s worth it.

Social media erased the need to build a website to express yourself online. Sure, early social media like MySpace allowed for you to radically change the look and feel of your page—adding music and changing the background—but ultimately, it was still a MySpace page, with a comment wall and your top eight friends. On top of that, MySpace had total ownership of that page, meaning when the site was bought and sold, individual users had no say in the changes. By 2019, you couldn’t even look at your old MySpace accounts anymore because they lost all the data from prior to 2016.

This only accelerated as we moved to new social media, like Facebook, which was determined to keep all important contact information within the app. Instead of a local business making a website on Geocities, they would make themselves a Facebook page—or now, an Instagram account—because all their customers likely had Facebook accounts already.

[…]

It is clear that tech billionaires like Musk know that when they own the means of communication, they run the whole show. If you’ve made a home on Twitter, you’re basically completely vulnerable to Musk’s randomly changing whims, and also his disgusting political beliefs. He campaigned with Trump and immediately congratulated him when he won the election. Also congratulating the President-elect were Zuckerberg and Amazon’s Jeff Bezos, two tech oligarchs who also want us to use their proprietary apps and websites for everything in our lives.

[…]

To me, having my own website, even one I run as a business with my friends, gives me a degree of freedom over my own work that I’ve never had before. If you look at my work on Kotaku, there’s so many garbage ads on the screen you can barely see the words. Waypoint and Motherboard are both being run like a haunted ship, pumping out junk so that Vice’s new owners can put ads on it. I don’t have to worry about that anymore—I don’t have to worry about my work being taken down or modified or sold, or put in an AI training set against my will. I have my own website, and it is mine, and I get to own it completely. I hope someday soon I can visit your website.

Source: Aftermath

Image: Patrick Tomasso

Should health tech be used to inform health professionals?

Apple Watch

There are risks with any kind of increased information or data presented to people without the kind of background to understand it. That’s why we have professionals.

There’s also a concern about privacy and data getting into the wrong hands. That’s why we have safeguards.

But, on the other hand, when it comes to smart watches and other health monitoring devices, we’re talking about trying to better understand our own bodies. I’ve got a Garmin smart watch which I use in conjunction with the Garmin app and also with Strava. You’ll pry it from my cold, dead hands.

There was one time when I went to the hospital and the consultant was interested in what my watch had been telling me. But, as this article shows, that’s rare.

What we need is some kind of standard way of reporting this data, along with caveats about how it was collected and how much it can be trusted.

Health Secretary Wes Streeting has talked about a proposal to give wearables to millions of NHS patients in England, enabling them to track symptoms such as reactions to cancer treatments, from home.

But many doctors – and tech experts – remain cautious about using health data captured by wearables.

I’m currently trying out a smart ring from the firm Ultrahuman – and it seemed to know that I was getting sick before I did.

It alerted me one weekend that my temperature was slightly elevated, and my sleep had been restless. It warned me that this could be a sign I was coming down with something.

I tutted something about the symptoms of perimenopause and ignored it - but two days later I was laid up in bed with gastric flu.

Source: BBC News

The UK needs a wealth tax

'Pay your tax now here!' sign. Sign, Harlingen, Texas. 1939. Photographer Lee Russell

Polly Toynbee, writing in The Guardian, argues that we need a wealth tax in the UK. In my opinion, it’s massively overdue. The only people who have benefited from the financial crisis and Brexit are those who were already well-off.

Given that you don’t get to choose how wealthy your parents are, advantages that you get in life from their wealth are a massive impediment to social mobility. Obviously.

Over the next 30 years, an unprecedented avalanche of £5.5tn will land in the laps of those who have chosen their parents wisely. The inheritocracy is ascending into the stratosphere: asset-rich parents are buying homes and advantage for their children and their children’s children, securing ever-rising privilege. Those born in the 1980s are on average due inheritances worth twice as much as those born in the 1960s. Parental income and wealth is a stronger predictor of someone’s lifetime earnings and wealth than in generations before. Inheritance is becoming an obstacle to social mobility.

No politician concerned about inequality, fair opportunities or financing the public realm can ignore wealth any longer. While wages have stagnated for 16 years, wealth has accelerated. Traditionally, policymakers have focused on fairness of incomes. But today, the possession of wealth is proving the greater distortion, with so much of it in effect untaxed. The mantra for a long time was that wealth taxes don’t work. But that can no longer be the answer.

[…]

If Labour wants to achieve things in power, it’s clear it needs more money. Wealth is the place to look.

Source: The Guardian

Image: The New York Public Library

Self-hosting isn't a thing for regular people

A laptop on a desk showing code on the screen

Perhaps I’m just getting old, but this rant resonated with me. Ostensibly, it’s about a particular app shutting down, but I’m quoting the more general part about ‘self-hosting’ not being an option for people who do things other than spend every waking moment near a computer.

If there’s one thing I don’t want to be in my forties it’s a system administrator. As time moves on, we abstract away from complexity to provide things as a service, which as far as I’m concerned is exactly as it should be.

One of the other options Omniverse suggests for moving off of its service is self-hosting, which is akin to telling me to go fuck myself. Self-hosting is great if your hobby is self-hosting things. Mine is not. My hobbies are reading things and drawing things and sewing things and climbing up things and feeling guilty about not writing enough things. I very much appreciate that I know how to computer well enough that I could self-host if I had to, could go fork some abandoned Obsidian plugin that hasn’t been updated in 3 years to try and make yet another rotting part of my digital ecosystem rot a little bit less slowly, but that is a terrible use of my time. I already host my own Fediverse server, if by host you mean pay someone in Europe a bunch of money to host it for me and all I have to do is ban some assholes occasionally, because at the moment I have more money than I have time and I simply do not wish to spend my one wild and precious life learning how to configure goddamn Sidekiq to optimize background processing queues just so I can offer my friends a refuge from the dillweed who turned Twitter into a Nazi bar.

Also, you know what people never talk about when they talk about self-hosting? A succession plan. If I suddenly died I don’t have any provisions for making sure the people relying on my little Hometown server aren’t suddenly left up a creek without a paddle. I am not going to host a read-later service just for myself because that would be an incredibly inefficient use of time and resources even if I did have the time and inclination to do so, but I am also not going to host anything else for my friends until I figure out what contingency plans look like. It’s on my list of things to figure out for my will, which is a very long list. This long list sits on another very long list of life TODOs that I never seem to get around to. I have wanted to figure out my will for approximately eight years, and I know that because that is how long ago I got married and we were like “ha ha we should do that soon” and then simply never did. Because life is so complicated, my guy.

Source: The Roof is on Phire

Image: Christopher Gower

On 'billionairism'

Hand holding several $100 bills on fire

This is a long-ish post, the second half of which discusses Bluesky. However, I’m more interested in the first half which talks about the ‘billionairism’ ideology which seems to have infected the world. It’s an anti-civic attitude which captures human value as financial value, and sees contributing to society as a ‘cost’.

Well kids, we live in a world built for billionaires and narcissists, and we pretty much have our whole lives. I know this isn’t news to people of awareness such as yourselves, but the proof is in the pudding now, and the pudding is on fire.

[…]

A “billionaire” is somebody who infects themselves daily with the sick need to amass so much money that they no longer are constrained by societal demands and expectations, but rather are able to impose their own demands and expectations upon society. A billionaire utilizes this power in order to modify society so that even more money comes to them, leaving them even further above the demands and expectations of society, allowing them to impose their will over society to an even greater extent, and so on and so forth. It’s a grave sickness, billionairism—a self-inflicted one, I think I mentioned—and what’s worse is that it’s a sickness whose worst symptoms billionaires force the rest of us to suffer. That’s the first way that billionairism resembles narcissism.

And we should talk about what billionaires do. What they do is get themselves in proximity to the natural value generated by our natural human system—value created by humans simply by being human and living together in proximity to one another, the structure known as “society,” in other words, from which all possible value springs—and then steal it for themselves and only themselves. They capture certain parts of natural generative human value—some human enterprise or process or concept, something other that has been made possible only through the existence of human society, and has only become successful within that context by generating value for other humans—and use their unnaturally stolen value to own it, and then pervert it so that it increasingly stops providing value to others but only sends value to themselves. They then use the stolen value that they have hoarded all for themselves as proof that they are exceptionally valuable people. Meanwhile, there is a certain amount of that stolen value that they still have to give back in order to keep the mechanism of their theft going—an amount that they call “cost,” which they greatly resent—which they use as proof that they are the beneficent source of all value anyone receives.

[…]

It occurs to me that nothing damages billionaires and other narcissists like human community, which is probably why they work so hard to destroy it.

Source: The Reframe

Image: Jp Valery

Visual music discovery

Screenshot of BBC Orbit

I’ve got an upcoming interview for a BBC R&D research role and so have been looking at what they’ve been up to recently. As part of that, I came across an experiment called ‘Orbit’ which is a visual way of discovering new music based on listening to audio snippets.

Find 5 new tracks every day from undiscovered artists

Source: Orbit

The Australian ban on social media is probably unworkable

Illustration of a teenager holding a tablet, with connective tissue coming out of their eyes and going into the screen.

This post by Neil Brown is a couple of years old now, but he linked to it in the wake of news that the Australian government have announced a ban on social media for under-16s.

As he points out by going through examples, this is entirely unworkable. When I tried to explain this to someone less technical, I realised that unless you enforce biometrics at every login, it’s pretty much impossible to enforce in a good-faith way. And that would be a huge privacy violation of children.

Let’s say that there are multiple countries with similar-but-not-identical laws.

  • Country A, which says that the website operator is not to provide its service to people in Country A who are under 21.

  • Country B, which says that the website operator is not to provide its service to people in Country B who are under 25.

  • And Country C, which says that the website operator is not to provide its service to people in Country C at all.

Assuming that the website operator does not want to shut up shop totally - by applying the most restrictive rule, of Country C, to everyone - and that it does care about the laws in other countries (a big “if”, but it’s my example, so…) how does the website operator establish where the user is located at the point at which they access the site, to know which rule to apply?

tl;dr

I don’t think you can, with any reasonable degree of assurance.
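To make the thought experiment concrete, here’s a minimal sketch (my own, not Neil’s; the country names and age thresholds are his hypothetical examples) of the rule table an operator would need. Even this trivial lookup assumes the two things the post argues you can’t reliably establish: where the user actually is, and how old they actually are.

```python
# Hypothetical rule table from the post's examples: minimum age per
# country, with None meaning a total ban on the service.
MIN_AGE = {
    "A": 21,    # Country A: no service to under-21s
    "B": 25,    # Country B: no service to under-25s
    "C": None,  # Country C: no service at all
}

def may_serve(country: str, age: int) -> bool:
    """Return True if the operator may serve this user under that
    country's rule. Reliably knowing `country` and `age` is the part
    the post argues is infeasible."""
    if country in MIN_AGE:
        threshold = MIN_AGE[country]
        if threshold is None:
            return False        # total ban
        return age >= threshold
    return True                 # no known rule for this country

print(may_serve("A", 22))  # True
print(may_serve("B", 22))  # False
print(may_serve("C", 40))  # False
```

The code is the easy bit; the hard bit is that every input to it is unverifiable without intrusive measures like biometrics or document checks.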

Source: Neil’s blog

Image: julien Tromeur

AI identifies more Nazca Lines

Nazca Lines

A great use for machine learning in finding, and hopefully helping to protect, indigenous art covering miles of ground in southern Peru.

Gouged into a barren stretch of pampa in southern Peru, the Nazca Lines are one of archaeology’s most perplexing mysteries. On the floor of the coastal desert, the shallow markings look like simple furrows. But from the air, hundreds of feet up, they morph into trapezoids, spirals and zigzags in some locations, and stylized hummingbirds and spiders in others. There is even a cat with the tail of a fish. Thousands of lines jump cliffs and traverse ravines without changing course; the longest is bullet-straight and extends for more than 15 miles.

The vast incisions were brought to the world’s attention in the mid-1920s by a Peruvian scientist who spotted them while hiking through the Nazca foothills. Over the next decade, commercial pilots passing over the region revealed the enormousness of the artwork, which is believed to have been created from 200 B.C. to 700 A.D. by a civilization that predated the Inca.

The newly found images — an average of 30 feet across — could have been detected in past flyovers if the pilots had known where to look. But the pampa is so immense that “finding the needle in the haystack becomes practically impossible without the help of automation,” said Marcus Freitag, an IBM physicist who collaborated on the project.

To identify the new geoglyphs, which are smaller than earlier examples, the investigators used an application capable of discerning the outlines from aerial photographs, no matter how faint. “The A.I. was able to eliminate 98 percent of the imagery,” Dr. Freitag said. “Human experts now only need to confirm or reject plausible candidates.”

Source: The New York Times

Backup link: Archive Buttons

EV batteries live way longer than assumed

Graph of EV battery residual value loss over lifecycle stages: First-Life, Second-Life, and Recycling.

Last year, I leased an electric vehicle (EV) for the first time: a Polestar 2. It’s incredible; I wouldn’t even consider any other type of car in future.

I don’t have to worry about how long the physical batteries will last, but it’s something that EV skeptics, conspiracy theorists, and fossil fuel lobbyists tend to focus on. That’s why it’s good to see the management consultancy P3 conducting a study to counter some EV battery myths.

Scatter plot showing SoH of EV batteries versus mileage with Aviloo and P3 data points and trend lines.

The term ‘state of health’ or SoH doesn’t have a standard definition, so it’s just used in this context to refer to EV battery capacity. The findings? Basically that EV batteries last way longer than was assumed. And, given that they can have a second and even third life after being removed from a car, there’s really no reason not to switch to an EV, pronto.
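As a rough illustration (mine, not the study’s), SoH in this sense is simply the remaining usable capacity expressed as a percentage of the pack’s original rated capacity:

```python
# Illustrative only: 'state of health' as used in the article, i.e.
# remaining capacity as a percentage of the original rated capacity.
def state_of_health(measured_kwh: float, rated_kwh: float) -> float:
    """SoH in percent; e.g. 75 kWh measured on a 78 kWh pack is ~96%."""
    return 100.0 * measured_kwh / rated_kwh

print(round(state_of_health(75.0, 78.0), 1))  # 96.2
```

The pack capacities above are made-up numbers; the point is just that a car at “90% SoH” has lost a tenth of its original capacity, not that it is about to fail.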

The field data suggests that the actual battery capacity is maintained longer than assumed under real-life conditions, especially with the often-cited high mileages of 200,000 kilometres and more. Based on the cell lab tests, the SoH model published by P3 in 2023 gave a much more pessimistic forecast for battery health. Up to around 50,000 kilometres, the laboratory model and the field data are roughly the same; above 100,000 kilometres, however, the trend lines diverge significantly. P3 concludes that the actual user-profiles and the control of the cells by the battery management system in the field significantly reduce ageing.

But how can the observed variation be explained? After all, some vehicles still have an extremely high SoH after more than 50,000 kilometres, while individual vehicles are still at 98 per cent after almost 200,000 kilometres – while others quickly fall below 90 per cent. In fact, the charging and usage behaviour of drivers and the vehicles themselves influence this, as do the manufacturers. On the one hand, the intended buffer (i.e., the difference between gross and net capacity) plays an important role in terms of size and utilisation of the buffer. That is because it can be used, for example, to reduce the noticeable ageing during the warranty period – by releasing a little more net capacity over time. On the other hand, the charging behaviour can be adjusted via a software update. On the one hand, this can be a higher charging power for shorter charging times, which leads to more stress in the cell. On the other hand, it is also possible that an update improves the control of the cells, for example, by optimising preconditioning to reduce stress during fast charging under sub-optimal conditions.

Source: electrive

From the 'everything fun is also bad' department

Photograph of someone holding a smartphone, playing Pokemon Go

Data, or rather organised data is an incredibly valuable commodity in the modern world. Large Language Models (LLMs) which underpin the latest generative AI applications need ever-increasing amounts of it to develop more complex functionality.

Pokémon Go was controversial when it came out because there were so many people playing it that it was causing chaos when so-called ‘gyms’ and game characters were randomly placed in various real-world neighbourhoods. Now it transpires that the makers of the game are developing the equivalent of an LLM for ‘visual positioning’ and that this might be used for military applications. FML, as they say.

Uh, so here’s something interesting. Niantic, the company behind Pokémon Go, published a long blog post last week outlining a new project they’ve been working on called a Large Geospatial Model, essentially a Large Language Model but for visualizing and mapping physical space. They’re calling it the Visual Positioning System, or VPS, and they plan to use it for future augmented reality products and robotics. The idea of mapping the whole world has been a big priority for Niantic over the last few years.

One new feature for Pokémon Go that uses VPS is called Pokémon Playgrounds and it lets a user place a virtual Pokémon on a location and other players will find that Pokémon where they left it.

Though, as Elise Thomas, over at the Institute for Strategic Dialogue, pointed out, it seems almost undeniable that this will not just power fun game mechanics. “It’s so incredibly 2020s coded that Pokemon Go is being used to build an AI system which will almost inevitably end up being used by automated weapons systems to kill people,” Thomas wrote.

Source: Garbage Day

Image: David Grandmougin

Tuvalu's Digital Twin

LIDAR scanning of Tuvalu by satellite

I initially thought this announcement from Tuvalu was from this month’s COP meeting, COP29. But it turns out that it was announced two years ago, and the update below was announced last year, at COP28.

It’s a sad but fascinating prospect: a nation without land, preserved digitally and with services available to the Tuvaluan diaspora after climate change means their physical territory disappears beneath the waves.

This is a more extreme version of Estonia’s e-Residency programme which was launched a decade ago. In that case, the threat was from other nation states, namely Russia.

I remember quite a few years ago at the Thinking Digital conference, just as the cryptocurrency craze was beginning, someone stood on stage and predicted the death of nation states, with people instead choosing digital nationhood. I don’t think it will be as binary as that. It’s much more likely to be something akin to dual nationality.

With time running out, Tuvalu has no choice but to start planning for this worst-case scenario. At COP27 (2022), Tuvaluan Minister Simon Kofe announced that Tuvalu will become the First Digital Nation: that it would digitally recreate its land, archive its rich history and culture, and move all governmental functions into a digital space.

This digital transformation will allow Tuvalu to retain its identity and continue to function as a state, even after its physical land is gone. It will also facilitate the governance of a Tuvaluan diaspora by creating a virtual space where Tuvaluans can connect with each other, explore ancestry and culture, and access new opportunities for business and commerce in various industries. Moreover, a permanent digital replica of Tuvalu – a new “defined territory” – will aid in the fight for continued sovereignty under international law.

Since the initial announcement of the First Digital Nation, Tuvalu has:

  • Completed a comprehensive three-dimensional LIDAR scan of all 124 islands and islets, laying the foundation for its digital nation and helping redefine its territory in the eyes of international law.
  • Begun upgrading its national communications infrastructure with the installation of two submarine cables, ensuring sufficient bandwidth for the transition to the cloud.
  • Started exploring a digital ID system, which will use the blockchain to connect the Tuvaluan diaspora and allow them to participate in Tuvaluan life, wherever they are.
  • Begun building a living archive of Tuvaluan culture, curated by its people. Citizens will be invited to contribute their most treasured personal items for digital preservation, creating a living record of Tuvaluan values.
  • Amended its constitution to reflect a new definition of statehood – the first of its kind in the world. The amendment pronounces that the State of Tuvalu within its historical, cultural, and legal framework shall remain in perpetuity in the future, notwithstanding the impacts of climate change or other causes resulting in loss to the physical territory of Tuvalu.

Source: Tuvalu.tv

We don’t write things down to remember them. We write them down to forget.

Close-up of tablet screen showing app icons including Reminders, YouTube, and Notes with device status indicators.

My workflow for Thought Shrapnel is roughly: come across an interesting article, save it to Pocket, revisit it and write about it. There are plenty of articles that I don’t write about, and I sometimes go on hiatus from this blog for a while.

As I get older, I don’t really understand the desire to capture all of the things and link them together. It can become fetishistic, an obsession. I seem to do alright in my personal and professional lives in terms of remembering stuff and combining it in new and interesting ways. And I don’t particularly have a ‘system’. I just remember stuff that I’ve written about, especially when I’ve written about it multiple times.

This article talks about the freedom of forgetting stuff. We’re loath to let things go into the ether because we ascribe value to the things we’ve collected. However, I suppose because I’ve collected, written, and jettisoned so much stuff in my life, I’m very comfortable in getting rid of it. I don’t need a million tabs open, a bookmarks manager stuffed with links, or a meticulous system. My approach is based on curiosity, interest, and writing about stuff.

That’s the true value of notebooks, notes apps, bookmarking tools, and everything else built to help us remember. They’re insurance for ideas. They let us forget.

[…]

We need to forget, but we first must feel safe forgetting.

[…]

We didn’t need bookmarks and notes as much as we needed the safety of letting go. Anywhere we could save our thoughts was enough.

Source: Reproof

Image: Omar Al-Ghosson

Scrolling on your phone is not a hobby

A Twitter screenshot discussing hedonism and abstention by users 'MED GOLD' and 'August Lamm.'

Transcribed text:

MED GOLD 🐌 @MedGold_

The idea that we live in a hedonistic world is one of the biggest myths of our time. American culture is surveilled, sterile, joyless, and uptight. Being addicted to the internet should not be mistaken for a lust for life.

August Lamm @AugustLamm · Jul 8

I'm calling it right now: abstention is the next big thing. Sobriety, celibacy, digital minimalism, dumb phones, religion. The age of hedonistic hyper-consumption is over. We're moving into a new and peaceful age marked by moderation and self-discipline. I can’t wait.

2:00 AM · Jul 9, 2024 · 190.4K Views

I came across this blog post this morning and I can’t stop thinking about it. I wish I’d seen it when it was published a few months ago.

The author gives it the provocative title The Mainstreaming of Loserdom, explaining that it seems to have become normal for people to not only admit to having “no hobbies, no interests, no verve,” but be positively “gleeful” about it. It seems that a trend that had already been set in motion was accelerated due to the pandemic.

I needed to share the post here because I’m conflicted about it. On the one hand, I’ve never had a particularly interesting social life — at least by other people’s standards. On the other, I’m one of the people the author talks about that creates stuff on the internet.

What I think we’ve got is more people online than ever before, and so a larger sub-section of people who are, if not clinically depressed, certainly acting in a way that gives off morose vibes. They’re living life through the lens of consumption, something which our economic system incentivises. After all, it’s difficult to monetise people just hanging out having a chat. Unless it’s a podcast, I guess 😉

It was clear twenty years ago that someone who rarely engaged with their peers, didn’t really have friends, and didn’t really leave their house wasn’t aspirational: they were odd.

I know what people are going to say: not everyone drinks, not everyone parties, we have social anxiety, everything is too expensive… People simply aren’t connecting the way they used to, and I won’t be the bad guy for pointing out that it doesn’t surprise me that people are desperately lonely while also saying their favorite hobby is… staying home.

[…]

I’ll also defend myself preemptively and say not everyone has the same threshold for social interaction, which again, is fine. My issue is that I do not believe that the millions of people engaging with these posts all have very low tolerance for social interaction.

[…]

I’ve been on the internet for twenty years: I’ve been on fanfiction.net, I’ve been on Livejournal, I’ve been on Tumblr. I was surrounded by people who spent time alone, but they were creating. They were writing, they were generating, they were knitting and sewing and painting and dreaming. The specific activity I’m talking about is a lack of any of this. The people screaming from their rooftops about how they don’t go anywhere and don’t have any friends aren’t the same people writing 70,000 words of Harry/Draco smut, I’m sorry! I know my people, and this feels different. It feels more sinister. Posting fanfiction online is a bid for community. Scrolling on your phone is not.

Source: Telling the Bees

Forms of perceptual learning

Fencer standing near white painted wall

“The systems approach begins when first you see the world through the eyes of another” wrote C. West Churchman. Seeing things from another’s point of view is usually framed as ‘empathy’ but often what isn’t discussed is the effect that a change in perspective can have on a person themselves. This is sometimes colloquially and humorously referred to as “things we cannot unsee”. It’s automatic: the way we understand the world has changed.

Stephen Downes shared this recently-updated article from the Stanford Encyclopedia of Philosophy about a topic which I’ve only studied obliquely. ‘Perceptual learning’ is about long-lasting changes in perception resulting from practice or experience, and can take four forms: differentiation, unitization, attentional weighting, and stimulus imprinting.

When most people reflect on perceptual learning, the cases that tend to come to mind are cases of differentiation. In differentiation, a person comes to perceive the difference between two properties, where they could not perceive this difference before. It is helpful to think of William James’ case of a person learning to distinguish between the upper and lower half of a particular kind of wine. Prior to learning, one cannot perceive the difference between the upper and lower half. However, through practice one becomes able to distinguish between the upper and lower half. This is a paradigm case of differentiation.

[…]

Unitization is the counterpart to differentiation. In unitization, a person comes to perceive as a single property, what they previously perceived as two or more distinct properties. One example of unitization is the perception of written words. When we perceive a written word in English, we do not simply perceive two or more distinct letters. Rather, we perceive those letters as a single word. Put another way, we perceive written words as a single unit (see Smith & Haviland 1972). This is not the case with non-words. When we perceive short strings of letters that are not words, we do not perceive them as a single unit.

[…]

In attentional weighting, through practice or experience people come to systematically attend toward certain objects and properties and away from other objects and properties. Paradigm cases of attentional weighting have been shown in sports studies, where it has been found, for instance, that expert fencers attend more to their opponents’ upper trunk area, while non-experts attend more to their opponents’ upper leg area (Hagemann et al., 2010). Practice or experience modulates attention as fencers learn, shifting it towards certain areas and away from other areas.

[…]

Recall that in unitization, what previously looked like two or more objects, properties, or events later looks like a single object, property, or event. Cases of “stimulus imprinting” are like cases of unitization in the end state (you detect a whole pattern), but there is no need for the prior state—no need for that pattern to have previously looked like two or more objects, properties, or events. This is because in stimulus imprinting, the perceptual system builds specialized detectors for whole stimuli or parts of stimuli to which a subject has been repeatedly exposed (Goldstone 1998: 591). Cells in the inferior temporal cortex, for instance, can have a heightened response to particular familiar faces (Perrett et al., 1984, cited in Goldstone 1998: 594).

Source: Stanford Encyclopedia of Philosophy

Image: CHUTTERSNAP

Llama 3 is only free to use until monthly active users exceed 700m

An illustrated person looks up at a large hazard symbol, which has a character representing data science and AI ‘standing’ next to it.

Amidst the drama around the WordPress project at the moment (which is, in my experience only a public version of what goes on behind the scenes of any major Open Source project) I was interested in a post by Matt Mullenweg.

I’ve been using Llama 3 on projects where it wouldn’t be appropriate to use OpenAI’s offerings, but I should have known that, given it’s from Meta, there would be some shenanigans. And so it proves.

I’ll not share the rest of the post, given Matt’s ‘ecosystem thinking’ seems a bit disingenuous given the spat he’s engaged in, but this bit shocked me.

Open Source, once ridiculed and attacked by the professional classes, has taken over as an intellectual and moral movement. Its followers are legion within every major tech company. Yet, even now, false prophets like Meta are trying to co-opt it. Llama, its “open source” AI model, is free to use—at least until “monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month.” Seriously.

Excuse me? Is that registered users? Visitors to WordPress-powered sites? (Which number in the billions.) That’s like if the US Government said you had freedom of speech until you made over 50 grand in the preceding calendar year, at which point your First Amendment rights were revoked. No! That’s not Open Source. That’s not freedom.

I believe Meta should have the right to set their terms—they’re smart business, and an amazing deal for users of Llama—but don’t pretend Llama is Open Source when it doesn’t actually increase humanity’s freedom. It’s a proprietary license, issued at Meta’s discretion and whim. If you use it, you’re effectively a vassal state of Meta.

When corporations disingenuously claim to be “open source” for marketing purposes, it’s a clear sign that Open Source is winning.

Source: Ma.tt

Image: Managing Data Hazards by Yasmin Dwiputri & Data Hazards Project

The work to do the work

Flowchart explaining various stages of work in a project, including preparation, execution, and additional unforeseen tasks.

Abi Handley shared the above image on LinkedIn from a web developer who, back in 2022, worked out all of the time they spent on a project. Unsurprisingly, as anyone who has ever led a project will know, it’s the “work to do the work” which takes the most time.

When you’re younger, enthusiasm, energy and naivety tend to get you to the end of a project. When you’re in your forties, like me, it’s process. This post talks about running a ‘postmortem’ but we insist on pre-mortems as well as retrospectives. We minimise ‘status update’ meetings, using tools such as Trello to track task completion and Loom to explain things that would take too long via email.

Additionally, some people seem to think that being ‘professional’ means not bringing your emotions to work. But emotion is what makes us human, and so acknowledging this and factoring it into projects is one of the keys to running them successfully.

I had been aware during the project that there seemed to be a lot of “extra work”, but putting it down on paper highlighted the multitude of “invisible” tasks and challenges which every web development project has.

There were two common threads:

  • much of the work was the “work to do the work” rather than the “actual” work
  • most of the work was under- or un-estimated because it wasn’t the “actual” work

Source: Dave Stewart

About time to head south for winter

I don’t think this is a new ‘False Knees’ cartoon, but it’s a great one and gave me a chuckle, especially at this time of the year. My SAD light is out, and it’s chilly in Northumberland.

A four-panel comic of two bluebirds discussing the changing seasons and migration.

Source: False Knees

Ocean acidification approaches the boundary

A sea turtle swims in a coral reef in Hawaii. Ocean acidification, found to be on the brink of crossing a boundary into higher-risk territory, can affect coral skeleton formation.

I feel like this should perhaps be bigger news?

Boundaries that have already been exceeded have to do with climate change, freshwater availability, biodiversity, land use, nutrient pollution (such as phosphorus and nitrogen) and the introduction of synthetic chemicals and plastics to the environment.

Ocean acidification is one of the systems that has not yet crossed its planetary boundary, along with ozone depletion and aerosols in the atmosphere. But while ocean acidification is still in the “green zone,” the new report finds it’s trending in the wrong direction. Scientists now say this metric is on the brink and may cross out of the safe zone in the next few years.

Earth’s oceans absorb carbon dioxide from the atmosphere, providing a valuable carbon sink as humans burn fossil fuels. But this process also makes the oceans more acidic, which can disturb the formation of shells and coral skeletons and affect fish life cycles, per the report.

As ocean acidification approaches the boundary, scientists are particularly concerned about certain regions, like the Arctic and Southern oceans. These areas are vital for carbon and global nutrient cycles, “which support marine productivity, biodiversity and global fisheries,” the report says.

Source: Smithsonian Magazine

A Troll's Charter

Twitter logo in black

Given the groups who financed Elon Musk’s acquisition of Twitter, I don’t think it’s unreasonable to see the events relating to the platform over the past few years as an attempt to stifle progressive discourse.

It’s been seven years since I deleted 77.5k tweets I composed between 2007 and 2017. I could see the way the wind was blowing, even before Musk’s acquisition. The latest news is that blocked users will still be able to see the tweets of the person who’s blocked them, which is just a troll’s charter.

If, for some reason, you’re still on there, perhaps it’s time to leave?

X will now make your posts visible to users you’ve blocked. In a reply on Monday, X owner Elon Musk said the “block function will block that account from engaging with, but not block seeing, public post.”

[…]

Musk has been vocal about his dislike of the block button. Last year, he said the feature “makes no sense” and that “it needs to be deprecated in favor of a stronger form of mute.” He also threatened to stop letting users block people on the platform completely, except for direct messages.

Source: The Verge

Image: BoliviaInteligente

A countercultural perspective to the capitalist notion of 'productivity'

A black and white photograph of two nuns walking next to some trees

This article examines how religious communities, particularly nuns and monks, approach productivity differently from the modern, output-driven culture. It highlights how members of religious orders redefine productivity as rooted in spiritual fulfilment, sufficiency, and human connection rather than constant work and economic gain.

These experiences suggest that true productivity lies in fruitfulness and grace, not in relentless efficiency, which offers somewhat of a countercultural perspective to the capitalist emphasis on always doing more.

We are conditioned to listen to podcasts while washing up, read books on the commute and dash out emails while drinking a morning coffee. I can’t even ‘just’ watch a Netflix show without needing something else to do, so resort to doing cross stitch in front of the TV in order to put my phone down. This is the efficiency for which we congratulate ourselves, getting more done in the same time. I draw the line at the growing trend for listening to podcasts at double speed to inhale the same information more efficiently, less fruitfully.

When I first raised the idea of writing this piece, and put out the rather niche call for nuns, priests and monks willing to be interviewed about productivity culture, I was struck by the number of responses from people desperate to read it. The desire for wisdom about life and work that isn’t geared just towards increasing the latter is real.

There were points in every one of the conversations I had with Sister Liz, Sister Gabriel, Father Thomas and Father Sam, in the middle of my working day, that felt like a mirror being held up, both gently and painfully, to the busyness and imbalance of my own life. If Melville was right that nothing is what it is except for contrast, then the lessons of the religious life for those of us grappling with the need to be ‘productive’ are surely our greatest example.

Source: THEOS

A landscape of havoc and fracture

Illustration of a Chimera entity with multiple heads and arms, flanked by repetitive assembly line workers on computers, symbolizing human data processing in AI systems.

The last paragraph of this post by Julian Stodd, which I discovered via OLDaily, points to something emancipatory about generative AI that I think some people may have missed:

An interesting feature of the Generative AI revolution is that whilst the technologies themselves are monumental, both in terms of complexity and physical energy and scale, it may well be individuals, at scale, who drive the true change. Not a single technology that breaks in, but rather people breaking out. Breaking out of restrictive and constrained structure.

Stodd is part of a doctoral programme, and (with no lack of hyperbole) discusses how his cohort is likely to be “the last to really read books… to really write for myself… to be confused and lost in thought.” He calls this a “landscape of havoc and fracture” and points to four dimensions of this shift:

  1. Dialogic Engines – synchronous iteration and exploration of ideas, warping legacy ideas of trust, self doubt, foolishness, failure, and curiosity. As we wrote in ‘Engines of Engagement’, Generative AI makes high quality dialogue a commodity, but not simply as a service – it shifts the social context of such. So we can be in dialogue as a solo feature, removing all social judgement of curiosity and ignorance, if we dare.

  2. Agentic Retrieval – not just a search engine, but a context setting system. These tools can shift the boundaries of context – not telling us what we asked for, but giving us what we may need. And from a perspective of virtually unbounded knowledge. We can factor this into our dialogue – asking for breadth and challenge to our thinking – or we may find it just lands. I think that systems shifting context is highly significant, as the fracture and evolution of context is a key part of insight and even paradigmatic change.

  3. Trans-disciplinarity as the norm: our taxonomies of knowledge are not natural, but rather shaped by legacy mechanisms of need, discovery, ownership, and understanding. We have tended to segment our knowledge and hence structures of learning (as well as power, status, and identity) vertically around these themes. So we have engineers and poets, but not many poetic engineers. I think Generative AI changes this in significant ways, if we allow it to: permits a broadening of vocabulary and conception, a translation engine if you like, but also a provocative one – if we ask or if it offers.

  4. The Primacy of Sense Making: I’ve said for some time that knowledge itself is shifting in the context of the Social Age, and Generative AI scales this change. The latest GenAI tools are Engines of Synthesis, reflection and contextualisation, leaving us in a radically broadened landscape of sense making as individual and collective feature. And I don’t think sense making per se is at threat of absorption by technology. Not that the Engines cannot make sense, just that our act of consumption is inherently linked to re-contextualisation and insight. In other words, if the technology has already chewed it over, we will chew it over again. It just broadens the space and foundations for us to do so.

I’ve been using GPT-4o for my MSc and it’s so much better and deeper to learn with an assistant than to rely on what’s provided to you as a student, and what you can discover by just wading through books and articles.

Source: Julian Stodd’s Learning Blog

Image: User/Chimera by Clarote & AI4Media

Leadership, gender, and 'abusive supervision'

Headless man in a white shirt, dark trousers, and brown shoes, suspended in mid-air with scattered white sheets of paper, against a red background.

Prof. Ivona Hideg writes about a study she carried out during the pandemic around men and women leaders. While both experienced higher levels of anxiety, the amount of ‘abusive supervision’ was lower in women. The study was limited in terms of gender identification and sexual orientation, but it’s still interesting.

For me, this study supports what I have experienced in my career to date: women tend to be better at regulating their emotions, which is the exact opposite of the stereotype of women in leadership positions.

In our research, we investigated 137 leader-report pairs working in Europe (primarily the Netherlands) in the service (38%), public (28%), or information and technology (23%) sectors during the early phases of the pandemic in 2020. The majority of leaders were men (56%), Dutch (59%), white (92%), and heterosexual (95%). The majority of direct reports were women (56%), Dutch (60%), white (89%), and heterosexual (88%). These leaders reported their emotions during the pandemic; their reports then rated their leaders’ behaviors.

Women leaders reported higher levels of anxiety regarding the pandemic than men leaders. There were no gender differences in feelings of hope toward the pandemic. When leaders’ anxiety was higher, so was their abusive supervision, whereas when leaders’ hope was higher, so was their family-supportive supervision. Critically, supporting our hypotheses, we found that these relationships between leaders’ emotions and behaviors depended on their gender. Leaders’ emotions were only related to their leadership behaviors if they were men, but not if they were women.

Namely, in line with gender role and emotional labor theory, women leaders engaged in low levels of abusive supervision regardless of how anxious they felt about the pandemic. By contrast, men leaders engaged in more abusive supervision, including behaviors such as being rude, ridiculing, yelling at, or lying to their reports when their anxiety was higher. Women leaders also provided high levels of family-supportive supervision irrespective of how hopeful they felt about the pandemic. By contrast, men leaders provided family-supportive supervision only when they felt more hopeful.

Source: Harvard Business Review

Water use literacy

The image is a comprehensive visual diagram titled 'Water World,' depicting the distribution and usage of water on Earth. The design employs colored circles and lines to illustrate different categories of water, their proportions, and their utilization.

We’ve just started a Mozilla-funded Friends of the Earth project around sustainability principles for AI. There seems to be a lot of noise around the amount of water employed to cool the data centres used to train large language models (LLMs).

While we should always be cognisant of the amount of energy and water used to provide us with new (and existing) technologies, I think there’s a lack of statistical numeracy going on here. For example, in the UK, 51 litres of water per person are lost due to leakage every day. That’s over a trillion litres per year!
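That trillion-litre figure checks out with some quick arithmetic; a minimal sketch, assuming a UK population of roughly 68 million (my assumption, not stated in the source):

```python
# Rough sanity check of the UK leakage figure quoted above.
# Assumption (mine, not the source's): UK population of about 68 million.
litres_per_person_per_day = 51
uk_population = 68_000_000
days_per_year = 365

annual_leakage_litres = litres_per_person_per_day * uk_population * days_per_year
print(f"{annual_leakage_litres:,} litres per year")  # about 1.27 trillion
```

Even with a population estimate a few million either way, the total comfortably clears a trillion litres.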

Alan Levine shared a link to this visualisation in the thread where I was discussing this stuff on the Fediverse.

Source: Information is Beautiful

Against cyberlibertarianism

Characters looking like Putin, Trump, etc. destroying a building named 'Democracy'. It's framed as a 'new Olympic sport'.

A long-ish and important post by Paris Marx in which he argues for a middle path between the ‘cyberlibertarianism’ of Silicon Valley and the China firewall approach. Just as the laws in most countries have a common basis but a different flavour, so I think we’ll see an increasing alignment of what’s allowed online with what’s allowed offline in various jurisdictions.

Instead of solely fighting for digital rights, it’s time to expand that focus to digital sovereignty that considers not just privacy and speech, but the political economy of the internet and the rights of people in different countries to carve out their own visions for their digital futures that don’t align with a cyberlibertarian approach. When we look at the internet today, the primary threat we face comes from massive corporations and the billionaires that control them, and they can only be effectively challenged by wielding the power of government to push back on them. Ultimately, rights are about power, and ceding the power of the state to right-wing, anti-democratic forces is a recipe for disaster, not for the achievement of a libertarian digital utopia. We need to be on guard for when governments overstep, but the kneejerk opposition to internet regulation and disingenuous criticism that comes from some digital rights groups do us no good.

The actions of France and Brazil do have implications for speech, particularly in the case of Twitter/X, but sometimes those restrictions are justified — whether it’s placing stricter rules on what content is allowable on social media platforms, limiting when platforms can knowingly ignore criminal activity, and even banning platforms outright for breaching a country’s local rules. We’re entering a period where internet restrictions can’t just be easily dismissed as abusive actions taken by authoritarian governments, but one where they’re implemented by democratic states with the support of voting publics that are fed up with the reality of what the internet has become. They have no time for cyberlibertarian fantasies.

Counter to the suggestions that come out of the United States, the Chinese model is not the only alternative to Silicon Valley’s continued dominance. There is an opportunity to chart a course that rejects both, along with the pressures for surveillance, profit, and control that drive their growth and expansion. Those geopolitical rivals are a threat to any alternative vision that rejects the existing neo-colonial model of digital technology in favor of one that gives countries authority over the digital domain and the ability for their citizens to consider what tech innovation for the public good could look like. Digital sovereignty will look quite different from the digital world we’ve come to expect, but if the internet has any hope for a future, it’s a path we must fight to be allowed to take.

Source: Disconnect

Image: Tjeerd Royaards

We look through screens rather than at them

Phone screen with reflected colour

I don’t know if you’ve ever been to a place where a famous artist, musician, or writer was born, worked, or died. Although it might be interesting on a surface level, the likelihood is that whoever it was escaped their environment into a world of imagination.

Less tedious to look at are artefacts such as notes, scribbled ideas and marginalia. What happens to all this, though, with a purely digital workflow? What will future historians have to work with? I’m guessing famous writers are similar to me: I don’t write letters to my wife; I send her messages on Signal. I don’t scribble down ideas on scraps of paper; I make digital notes. I don’t scribble in books; I highlight sections on Kindle or Google Books.

More importantly—for a biographer or anyone trying to tell a good story—the digital version of a hastily scribbled note pinned to the apartment door is less tangible and thus harder to romanticize. Text messages still don’t evoke adventure, even if they are the invisible engine behind most of what happens. They inherently violate the “show don’t tell” rule; they are all telling and no showing. […]

Analog media may not convey information as efficiently, but it has other benefits that may be easier to appreciate in hindsight. It is more decorative. It furnishes the physical environment in a way that digital technology—always evolving toward smaller, smoother, and lighter—does not. Or to put it another way: When digital technology is visible it’s usually because it failed to be invisible. The exception to this is the ever-present screen, which remains visible by definition, obviously; screens now account for nearly all of a computer’s tangible presence in the world. And screens are the exception that prove the rule because, as Byung-Chul Han has noted, we look through them rather than at them. Screens don’t decorate the physical environment so much as they invite us to stare through a window into a different kind of non-place.

Source: Kneeling Bus

More is always more where 'kitchen lipstick' is concerned

Illustration of a person's face surrounded by colorful bursts and stars coming from a white pot.

I’m a big fan of sriracha sauce, so this ode by Jay Rayner to ‘lifting’ ordinary dishes with the addition of things you find in cupboards and fridges spoke to me. My tips? Try coconut in your porridge, and balsamic vinegar (or pesto) on your next pizza.

Where dinner is concerned, God is always in the detail. By this, I mean the kind of dinner you scarf by yourself when it’s so late it’s almost early; the thing you eat when nobody is watching and the options are meagre but you still regard yourself as a person of high gastronomic standards, who sees the lowliest of food items as merely the opening salvo in a negotiation.

Which is how I found myself one night pelting a chicken and mushroom Pot Noodle which just happened to be lurking in the cupboard, with freshly sliced spring onions and batons of ginger, shiny black ribbons of finely chopped toasted nori and dollops of sriracha sauce and crispy chilli oil. And lo: the humble instant noodle has been elevated to the king of snacks, courtesy of my exquisitely honed culinary sensibility, and my endearing conviction that more really is always more.

[…]

A friend of mine describes doing all this as adding “kitchen lipstick”. I get her point: it’s the application of seemingly small details which vastly elevate the otherwise everyday. The original purchase suggests questionable taste. The adornments and embellishments restore one’s sense of self. Perhaps right now you have lurking in the fridge a pot of that grim corner-shop hummus, looking to stunt double as tile grouting? Why not go the full Ottolenghi and decorate it with toasted pine nuts, a thick dusting of smoked paprika, an extra dribble of that grassy olive oil over there and, for a final flourish, finely chopped flat-leaf parsley? Add fancy whole grain mustard and manuka honey to the cheapest of sausages, and glugs of madeira and a spoonful of dijon to instant gravy.

Source: The Guardian

Isolated places in the Lake District for wild camping

Tent with mountains in background

I took this photo on Friday night, just after setting up my tent in Eskdale. This is like the land that time forgot on the other side of the Hardknott Pass and somewhere that tourists to the Lake District seldom visit.

Wild camping, while officially illegal in England, is tolerated if you stay out of the way and use some common sense. Before I went, I looked at some other spots, and so wanted to share those with you for your enjoyment.

If you dream of a serene night by a gentle river, look no further than Eskdale. This elongated valley with the River Esk running through it offers perfect wild camping spots with stunning vistas of Scafell and Woolpack Point. During the day, Hardknott Roman Fort and Stanley Ghyll Waterfall are within reach.

Source: IDEAL magazine

Better Images of AI

A grid of three images, each depicting a bowl of strawberries, a small glass bottle of milk, and a single strawberry on a clean, white surface. The images progressively become more pixelated from left to right.

Have you noticed that most news articles, blog posts, and social media updates that talk about AI use weird robots and unconvincing imagery? This website, which I discovered via Anne Hilliger, seeks to address that.

There’s a wide range of Creative Commons-licensed options which I’ll be using to accompany stuff I write about AI — including this post!

Images representing AI as sentient robots mask the accountability of the humans actually developing the technology, and can suggest the presence of robots where there are none.

Such images potentially sow fear, and research shows they can be laden with historical assumptions about gender, ethnicity and religion.

However, finding alternatives can be difficult! That’s why we, a non-profit collaboration, are researching, creating, curating and providing Better Images of AI.

Source: Better Images of AI

Image: CC BY Catherine Breslin & Rens Dimmendaal

AI and community communication

People sitting on the floor round a table.

Stephen Downes has written a couple of articles relating Generative AI (GenAI) to Open Educational Resources (OER). In the first one, he responds to a blog post by Heather Ross, who argues that the “soul of open is in danger.” Downes is having none of it, and responds to her point by point.

I’m in agreement with all of it, and particularly with how ‘gatekeep-y’ (my word) the OER community can be. (I say this with deep love and respect for what the OER community has achieved, but it is somewhat insular and ivory tower-focused.)

There are a few people who have created a cottage industry for themselves by opposing every aspect of artificial intelligence. I think they’re wrong, and have concerns about them misleading educators about AI. But Heather Ross’s article takes it a step further.

This is colonialism:

“No, you don’t get to wash over or destroy the work we’ve done and the great work still to come within the open movement. If those encouraging the use of GenAI for open or for GenAI to replace open want to play a new game, that’s fine. We can’t stop you, but get off our field.”

It’s not your field.

And then, in the second post, which contains too many excellent and nuanced thoughts to summarise adequately, Downes sets his sights on what we mean by ‘free learning’. I’m sharing the part about colonialism because I think he gets to the nub of the problem: what people are often complaining about when they’re complaining about AI as ‘colonialist’ is that they are being colonised.

This is, of course, a problem, but the underlying issues are much more structural than people usually think. As Downes points out, what is necessary here isn’t to merely perpetuate the mindset of ‘giving’ people education, but rather to find ways “for a community to communicate with itself” in ways that reduce their reliance on other (usually more powerful) communities and interests.

The only real difference between what we’ll call ‘AI colonialism’ and ‘Good Old Fashioned Colonialism’ is in who is being colonized and who is doing the colonizing. In the case of GOFC, it was one nation colonizing another. In the case of AIC, it is one sector of the economy colonizing the rest. Though if we pause and consider for a bit we’ll find it’s not so different after all: in most societies, developed and otherwise, there is a structural colonialism, where one wealthier sector of society extracts value from the other, and then sells (or in the case of charity, ‘gives’) it back as a value-laden alternative.

I am so sympathetic with those who are opposing AI on these grounds, though my charity is extended only grudgingly to those who have only recently made the switch from colonizer to colonized. And my real loyalties are with those who have always been colonized - not only those in Eswatini (who have, to their credit, resisted colonization better than many) but also those in my own society and those like mine, who contribute with their language(s), system of laws, culture and traditions, social knowledge, values and beliefs, etc., and find an educational system - and knowledge economy generally - sold back to them, inevitably changed by the values and beliefs of those who performed the appropriation.

This is an unsustainable model. Over time, it not only reduces the wealth of the subjected population, it also reduces the capacity of the provider (or ‘donor’) community to generate wealth without these inputs (one imagines that a company like Disney would flounder without the privilege to incorporate and repurpose Arab or Indigenous culture and folklore).

[…]

Whether or not AI succeeds as a technology is moot; neither blocking AI nor regulating the industry will alter the model of aggregation and exploitation that it exemplifies. The knowledge, learning and information industries will continue to exist, and with or without AI will continue to harvest community language(s), system of laws, culture and traditions, social knowledge, values and beliefs, etc., and in some fashion reshape them according to their own values and sell them back to the community.

And this brings us back to what, to my mind, is the real purpose of open educational resources. They represent a means, mostly (though not exclusively) through digital technology, for a community to communicate with itself, to gather and share knowledge, to pass along its values and mores, its ideas and beliefs, and to be able to do this without reliance on external knowledge, information and learning providers.

[…]

It - to me, at least - was never about giving people an education (or giving them rights, or freedoms or anything else). It was about people being able to create these things for themselves.

Sources:

Image: tribesh kayastha

Migration → Adaptation → Carbon removal → Geoengineering

A line graph displaying different climate scenarios over time. The y-axis represents temperature (climate risk) in degrees Celsius, ranging from 1.5°C to 2.5°C. The x-axis represents time, starting from 'Today' and extending to an unspecified future.

I’ve already shared “technologist and climate geek” Ben James' most recent blog post about off-grid solar power. Digging into other posts unearthed one about his opinion that either “a country with no choice” or a billionaire will unilaterally start spraying sulphur into the stratosphere to help cool the earth. Or at least stop it heating so quickly.

Solar Radiation Management (SRM), as it’s known, is controversial, but has become less so recently. There’s an argument to be made that the $20 billion cost per year would be well worth it to buy us more time. I don’t know enough about this, but it’s clear that this is something that is likely to be on the table as a strategy, and probably won’t go through the UN or an international body first.

Sulfur disappears from the atmosphere quickly - it rains out after about a year. This means that once we’ve started SRM, it’s dangerous to suddenly stop. We need to keep spraying particles, all the time. If we suddenly stopped, the warming would spring back rapidly, causing a bad temperature shock. The correct way to stop is a gradual phase out.

Unfortunately, Solar Radiation Management (SRM) has some fairly gigantic problems.

  1. It doesn’t fix the root cause. Cooling the planet does not remove the CO2 which has accumulated in our atmosphere. It doesn’t stop that CO2 from acidifying the oceans and irreversibly destroying marine biodiversity.

  2. SRM could make us complacent about dramatically cutting emissions.

  3. Sulfur increases acid rain (harmful to many life forms) and will likely harm the ozone.

  4. If SRM is poorly implemented, it could dramatically change weather and rainfall patterns. For example, if sulfur is not injected near the equator, it will not evenly mix into the stratosphere, causing uneven cooling and heating.

[…]

Crucially, the biggest problems with SRM are probably not yet known. The side effects of putting sulfur into the stratosphere could be some of the most consequential unknowns in human history. Clearly, that’s a huge reason to do more research. But still - no matter how much research is done, when humanity tries to bend nature to its will, we can be sure that unintended consequences won’t be far behind.

[…]

People underestimate how controversial it can become to not do SRM. Imagine you are the leader of a country close to the equator. Crop failures, extreme heat, and city-destroying cyclones mean that your people are without drinkable water, have nowhere to sleep, and cannot feed their children. Mass social unrest and physical violence become normal for your country. SRM is the only action that you can take to turn off the disasters, and prevent your government being overthrown.

[…]

Ultimately, the decision to turn on the SRM machines will not be made by climate scientists, or carefully calculated risks. It will be made on the basis of nations rising or falling - by starving populations, revolutionaries, and leaders with their back against a wall.

[…]

Geoengineering is no replacement for getting our shit together. But there would be no honour in allowing the deaths of hundreds of millions of people, simply because they could have theoretically been avoided through more mitigation.

[…]

SRM might not make sense in your mind (it certainly doesn’t in mine). But do you view the world in the same way as a military dictator, “benevolent” billionaire, or leader of a starving country?

Source: Ben James

Some men just want to watch the world burn (and now there's research to prove it)

Stylised illustration of a person on fire in front of a building on fire

There’s a scene in one of my favourite films, The Dark Knight, in which Alfred, Bruce Wayne’s butler, explains that “some men just want to watch the world burn.” There are some mighty fine memes as a result.

But it’s true. Sometimes it’s because they’ve got no power, so they might as well provoke something that might be entertaining. What have they got to lose? Other times, it’s because they’ve got all the power, and they have crazy theories about world overpopulation.

Either way, there’s new research into this mindset which shows that it’s a psychological trait separate from others. I’m going to quote Brian Klaas at length, who explains it in an excellent post.

These people, according to the new research, share a desire to “unleash chaos to ‘burn down’ the entire political order in the hope they gain status in the process.” This trait now has a name — and an established psychological profile.

It’s called the “Need for Chaos.” Understanding it provides an important insight into the destructive world of modern politics, in which the trolls have taken over, and politicians are no longer problem solvers, but are rather political influencers. It’s not about making the world better. It’s about burning down the world of people they hate.

[…]

In particular, people who score high on this metric tend to answer that they agree with several of these statements:

  1. I get a kick when natural disasters strike in foreign countries.

  2. I fantasize about a natural disaster wiping out most of humanity such that a small group of people can start all over.

  3. I think society should be burned to the ground.

  4. When I think about our political and social institutions, I cannot help thinking “just let them all burn.”

  5. We cannot fix the problems in our social institutions, we need to tear them down and start over.

  6. I need chaos around me—it is too boring if nothing is going on.

  7. Sometimes I just feel like destroying beautiful things.

Then, to make sure that people weren’t just ticking the box next to every question mindlessly, the researchers included two additional statements that were the opposite of the other seven:

  1. We need to uphold order by doing what is right, not what is wrong.

  2. It’s better to live in a society where there is order and clear rules than one where anything goes.

Interestingly, when they looked at other toxic personality profiles — such as psychopathy (being a psychopath) and social dominance orientation (an urge to assert social dominance) — they found that the Need for Chaos was a separate dimension to destructive individuals. It wasn’t just capturing the same impulse.

It’s a unique trait.

[…]

The Need for Chaos trait is particularly damaging for individuals who also feel that they’ve been failed by society, manifesting in their loneliness. For them, sowing chaos is a way to lash out against the system while asserting their power and trying to establish some form of social status.

[…]

That creates a strange dynamic, in which most white men—by virtue of their historically privileged position in society—tend to score lower on Need for Chaos than other groups. However, when white men do score high on Need for Chaos, it’s particularly dangerous. To put it plainly, the research suggests that of those who have this chaotic trait, it’s most destructive when that person is a white man.

[…]

The challenge for modern politics, then, lies with figuring out a way to deal with the inevitable perceived loss of social status that accompanies a society that’s becoming more equal, while mitigating the damage that these aggrieved chaos agents can inflict on everyone else.

Source: The Garden of Forking Paths

A State of Systems Shifting

A decade ago, I was going to so many in-person conferences that I had both a dedicated blog and Twitter account. These days, I attend rather fewer. No longer being on Twitter, and with my conference blog long ago mothballed, I’m lacking a place to put reflections on events.

The purpose of this post isn’t even that, to be honest. I was just so blown away by Indy Johar’s presentation at the Systems Innovation Network conference today that I needed somewhere less ephemeral to put the notes that I managed to tap out with my thumb.

Don’t ask me questions about any of this. Not only am I still new to the whole world of systems thinking, but Indy seems to have a galaxy-level brain. Go and check out the Dark Matter Labs website.

Indy presenting at the conference

Situating the moment:

  1. Climate breakdown (not change, losing predictability and insurability - therefore access to capital markets)

  2. Mass multi-polar, multi-perspectival transition (different kinds of transitions in different parts of the world)

  3. Securitization of everything (pervasive in all of our conversations - everything driven by risk and security - energy, minerals, nutrient supplies)

Emergent term of ‘security economics’ changing market dynamics. ‘No transition without justice’ not simply a slogan, it’s important to be able to find a way forward (e.g. UBI or ‘universal basic nutrition’ experiments)

  1. Inequality and loss of solidarity

  2. High interest rate environment - inflationary economic context. Going to see more shocks to the economic system.

Difficult to price the material economy because of volatility.

  1. Rise of environmental right politics - localism, etc. will be co-opted by the far right. AfD / far-right of Conservative Party. Boundary words: who’s inside and outside the community.

Systems Practitioners shouldn’t use ‘community’ as too entangled. Be careful about language we’re giving power to.

  1. Labouring the transition - don’t have a labour force for the transition (great ideas, but can’t implement them).

Having to think about the constraints in the innovation landscape. Materially affecting our reality: UK can afford to build 14,000 homes/year according to Paris Agreement carbon budget. Labour government has promised to build 350,000/year. Need to do things differently - open up new pathways (right to homes).

Persisting with illusions of infinite supply - instead we need to look at constraints because that’s where the innovation is.

  1. Flooding with information - c.f. McLuhan’s thought about confusing a system by flooding it with info. People just spot patterns.

Far right give you a meme to help you understand reality - they hijack a pattern analysis.

  1. Scale of the shift - only 7.2% of global economy is ‘circular’ and it’s declining. Need fundamental shifts in material economy.

  2. Volatility in the system massively increasing - energy costs, food costs

  3. New Allies - central banks, security services, intergenerational wealth, civil society. Need to have representatives from these types of organisations at this conference. New theories about asset ownership.

Westminster living in synthetic domain. Everyday politics to what we’re observing.

“Systems is about conversation not communication.” This means we can deal with more information than previously thought.

Security & Resilience of Systems

“Pre-emptive peace strikes” in places where there’s risk of systemic volatility.

Risk to whom? Rooted in assets and value, rooted in monetary frameworks. Preservation of power.

Uncrystalised risk in the system. If you put the risks on the balance sheets, the organisations aren’t solvent anymore. No longer viable. Collusion and corruption therefore become a systemic risk - organisations interested in survival.

(e.g. of Kristallnacht and insurance companies not paying for broken windows but instead paying ‘force majeure’ money to Nazi Party)

Explosion of sovereignties - more of an agentified world view. People don’t ‘assign’ their sovereignty to the state as they did in the past. Multitude of sovereignties.

Need to work beyond democratic renewal systems - legitimacy? “States are not the public”

Systems scaffolding - who owns the solution for portfolio (unless the system wants to implement, just remain as sticky notes). Need to work at system capabilities level.

Trans-systems work. Structural systems transformation.

Constraints - key shifter of innovation space.

‘Trap’ of the system boundary and the other. Need to build new language. Different dynamics to bounded models.

Systemic gap in price and value. Unpriced value in the system - going to be something that organises a lot of systems work (e.g. looking at single food product or wider systems level)

Deep Code failure at systems level. Language problem - we use old-world language which traps us. Also ‘property rights’ likely to be challenged.

[Dark Matter iceberg graphic]

Building compound learning organisations and systems. Freedom and agency must lie in the actor for a system to be a system. That means learning. How do we build these?

Chief Learning Officers instead of CEOs. Coherence is formed not around risk but around capacity to learn. Higher overheads, but higher resilience and innovation capacities.

Crisis-driven system transitions. We’re going to live in a world where crises shatter Overton Windows. Emerging Theory of Change.

Big challenge is legitimacy. Mountains over mountains.

Single-point optimisation doesn’t work for an entangled planet. Need to focus on multi-point optimisation.

Multi-organisation organising. Contracting and coordination makes that difficult - what are the frameworks here?

Difficult for states to impose transition, needs to be negotiated.

System financing, structured economic systems, and para-colonizing financial capital. How do you move capital through a non-colonial lens? Capital is an extension of the dominion theory of the world.

‘System accelerators’

Intermediary agent-trust economy. How to build a different way to finance things. Turning energy meter into financial instrument? Public interest micro-trusts. Way of regulating the translation space. Weak signal.

Relationship with material economy - borrowing, not owning.

Freedom and systems - we need to build capacity for agents to be free (not in terms of market choice, but free in terms of being radically human). People and institutions feel trapped. Combined with volatility and uncertainty this creates fear.

First movers - food, material economy

We’re trying to make stuff circular that shouldn’t be. Biomaterial level? Needs to interact with nutrition system. Also river systems work (key fragility point)

Dark Matter Labs has new publication about portfolios - who owns them? New ways of organising to deal with portfolio allocation.

Problem of having the incumbents in the room when we’re talking about system transitions. 40% of the people who this issue will affect aren’t even born. Might be worth having empty seats to recognise this?

We don’t have data infrastructure - cities can’t calculate carbon emissions. Can’t just be ‘open data’ as requires security.

Operating in a deep war of values - e.g. billionaires willing to throw money at throttling the human race because they think this is the answer. Accelerating towards a ‘throttling event’. Very different perspectives on the table.

Our own governance - need integrity. Systemic question.

Financing the deep work - real issue, end up talking about surface level.

How do we move from communities of care based on fear (i.e. the far right) to communities of care based on love?

The future is off-grid solar

An electron heading towards a solar panel

I’ve read some of the thoughts in this post via Low-Tech Magazine especially around the DC/AC/DC conversion being pointlessly lossy. However, this is the first I’ve read of being able to use excess solar power to create other forms of energy.
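To make that conversion-loss point concrete, here’s a back-of-envelope sketch in Python. The efficiency figures are rough illustrative assumptions of mine for typical consumer hardware, not numbers taken from either post:

```python
# Back-of-envelope: power lost when solar DC is inverted to AC and then
# rectified back to DC inside a device's power supply.
# Both efficiency figures are illustrative assumptions, not measured values.

INVERTER_EFFICIENCY = 0.95   # DC -> AC (assumed)
RECTIFIER_EFFICIENCY = 0.90  # AC -> DC in a device PSU (assumed)

def delivered_dc_power(panel_watts: float) -> float:
    """Watts reaching a DC load after a DC -> AC -> DC round trip."""
    return panel_watts * INVERTER_EFFICIENCY * RECTIFIER_EFFICIENCY

panel_watts = 400.0
delivered = delivered_dc_power(panel_watts)
fraction_lost = 1 - delivered / panel_watts

print(f"{delivered:.0f} W of {panel_watts:.0f} W delivered "
      f"({fraction_lost:.1%} lost in the round trip)")
```

Under these assumed efficiencies, roughly a seventh of the panel’s output never reaches the load, which is the saving a direct-DC design keeps.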

[S]olar deployment is accelerating at breathtaking speed. Most of the world’s solar power was installed in the past 30 months. In fact, China installed more solar in 2023 than the US has installed in history.

In the UK in 2024, I can go online and buy a solar panel with the same dimensions as a fence panel, for only double the cost. In five years, the cost of solar will have halved again.

[…]

Solar will saturate the power grid, but that doesn’t mean that we’ll stop building it. It just means that we’ll use it off-grid.

[…]

The cost of solar energy in a sunny place is trending towards virtually-free.

This is solar’s opportunity to not just displace electricity supply, but also primary energy supply. Rather than simply supplying energy in the form it’s consumed (electricity), intermittent solar is so fricking cheap that it could manipulate atoms into fuels for subsequent consumption.

We’re talking about using solar to create synthetic kerosene for planes, clean ammonia for fertiliser, clean methanol for shipping, and maybe even synthetic natural gas for general purpose use.

These synthetic and ‘green’ fuels all rely on green hydrogen as a base ingredient. Green hydrogen is extraordinarily expensive to produce, and the only cost-competitive way to make it is off-grid solar.

[…]

Taking solar off the grid also has a few other major cost advantages. If you are ripping solar straight into a DC application, you can skip the costs and efficiency losses of inverting that power into AC. If you lose most of the balance of plant, power electronics, and the paperwork of a grid connection, you’re getting really cheap and fast.

Source: Ben James

How Bluey-Green Was My Valley?

The text 'Is my blue your blue?' on a blue background

After discovering this site earlier in the week, I’ve shown it to my wife and my mother. I’m interested in the results, because they’re the two people in my life that I’ve most disagreed with when it comes to the question “what colour is that?”

It turns out that, when it comes to the blue-green continuum, my wife isn’t so far away from me. But my mother? According to her, something isn’t “blue” unless it’s very blue. Fascinating from a phenomenology-in-practice point of view.
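The site essentially locates each person’s personal threshold on the blue-green stretch of the hue wheel. Here’s a toy Python sketch of that idea; all the threshold and hue values (including the 175-degree default) are illustrative assumptions of mine, not figures from the site:

```python
# Toy model of the blue/green boundary: each observer has a personal
# threshold hue (degrees on the HSL colour wheel, where ~120 is green
# and ~240 is blue) above which they call a colour "blue".
# All threshold values here are illustrative assumptions.

def names_it_blue(hue_degrees: float, threshold: float = 175.0) -> bool:
    """True if an observer with this threshold would call the hue 'blue'."""
    return hue_degrees >= threshold

# Two observers with different thresholds disagree only in the middle band:
mine, my_mothers = 170.0, 200.0
for hue in (160.0, 185.0, 210.0):
    print(f"hue {hue:.0f}: me={names_it_blue(hue, mine)}, "
          f"mother={names_it_blue(hue, my_mothers)}")
```

Hues between the two thresholds (here, 170 to 200 degrees) are exactly the contested ones: “blue” to one observer and “green” to the other.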

Source: ismy.blue

So far, so dystopian

Diagram explaining process of AI misleading questioning

Although there are plenty of people who would say otherwise, I think we’re in an antediluvian period with LLMs. We’re not seeing ads inserted or intentional misinformation being spread through mainstream offerings.

It won’t be long, though, and weak signals like this give us a glimpse of the future.

This study examines the impact of AI on human false memories–recollections of events that did not occur or deviate from actual occurrences. It explores false memory induction through suggestive questioning in Human-AI interactions, simulating crime witness interviews. Four conditions were tested: control, survey-based, pre-scripted chatbot, and generative chatbot using a large language model (LLM). Participants (N=200) watched a crime video, then interacted with their assigned AI interviewer or survey, answering questions including five misleading ones. False memories were assessed immediately and after one week. Results show the generative chatbot condition significantly increased false memory formation, inducing over 3 times more immediate false memories than the control and 1.7 times more than the survey method. 36.4% of users' responses to the generative chatbot were misled through the interaction. After one week, the number of false memories induced by generative chatbots remained constant. However, confidence in these false memories remained higher than the control after one week. Moderating factors were explored: users who were less familiar with chatbots but more familiar with AI technology, and more interested in crime investigations, were more susceptible to false memories. These findings highlight the potential risks of using advanced AI in sensitive contexts, like police interviews, emphasizing the need for ethical considerations.

Source: MIT Media Lab

Because capitalism

Advert for Gillette razors

I enjoyed this rant that starts off talking about shaving being too expensive, and ends by giving examples of things that have replaced other things, and are worse.

(FWIW I’ve found that Bulldog razors seem to last a lot longer than other cartridge-based options)

So, does this matter? I assume that for most of the people reading this, the sums of money involved may seem pretty trivial. But I think the changes in the razor market are obviously bad, and reflect similar changes that can be seen in many other markets. We see new products launched which promise minor benefits in convenience, and which crowd out older, cheaper, and better products. Those older products are deliberately marginalised, and more money is captured from consumers without them really gaining any value from their expenditure.

[…]

Tea bags replace loose leaf tea. Allows for lower quality tea to be sold, diminishes the re-use of tea leaves. Also ludicrous product differentiation along the lines of ‘we have a special shaped tea bag’.

[…]

Subscription services like Hello Fresh, where you can pay well over the odds to have some vegetables delivered to you.

Source: John’s blog

There is an opportunity to...

Draw the rest of the fucking owl meme

I just saw that Tom Critchlow has taken a job, which is surprising given how much he waxed lyrical about the independent life. That post took me to one he wrote earlier this year about being useful rather than giving advice.

Giving advice starting with “you should…” is problematic, as I think we all come to learn through experience in both our personal and professional lives. It assumes you have all of the context, which is almost never true. Instead, pointing out “opportunities to…” is a much better framing.

Otherwise, as a consultant you’re telling them to do the very thing they don’t have the capacity to do. You’re telling them to draw the rest of the owl in the meme.

After all, it’s rare that a client doesn’t have any clue about what they need to do. Usually, in my experience at least, they need help choosing between options, and then capacity-building to get there.

Giving advice is an intensely personal thing. The feeling of learning something new sits right next to the feeling of shame for not knowing it in the first place. And worse, in the client/consultant relationship, the client is at least partially complicit in the situation when they come to you.

[…]

Giving advice is fraught even if the problem is well defined and you do know the answer. So when you’re working on strategic, ill-defined projects where there isn’t a right answer - giving advice is incredibly delicate, and in some cases not even possible.

So if you’re asking “You should…” to the client, stop and examine if you’ve properly defined the situation and provided evidence for the problem, to help the client deeply internalize the problem and win over the necessary stakeholders before you propose any kind of solution.

[…]

“There is an opportunity to…” This phrase is the key - it places the focus correctly on first defining the problem - and then providing evidence - before focusing on the solution. It allows us to articulate and quantify the opportunity while leaving room for the client to have say over resource allocation, for the client to shape the solution and for the client to determine prioritization and timing.

[…]

This all builds up to my personal consulting mantra: always work on the next most useful thing.

This mantra helps remind me that consulting isn’t about being right, it’s about being useful.

[…]

Always work on the next most useful thing. And that doesn’t always involve doing what the client asked for.

Source: Tom Critchlow

100 tips to sort your life out

Illustration of a man doing a 'plank' while the coffee is brewing

I was pretty amazed that Team Belshaw already does at least 75 of these 100 tips to sort your life out. Here are three that I personally don’t currently do, but which I might start doing.

  1. Carry ‘vex money’: Always carry enough cash to get you out of danger or trouble if other methods fail – a taxi fare at least.

[…]

  2. Try coffee planking: “Every morning I get up and make coffee for my wife and me. One cup takes one minute 18 seconds to brew, and every morning for the last 12 months I have planked for this period. Simple thing, using the dead time.” Anonymous reader

[…]

  3. Keep track of praise and thanks: Reader Sarah, who works as a teacher, keeps every thank-you card she has ever been given: “When I’ve had a rough day at school, I flick through them to remember some of the lives I have had an impact on.” Another reader, Lewis, saves positive messages about himself: “When times are tough or I’m feeling down, I dig through it and remind myself of the good things people have shared with me over the years.”

Source: The Guardian

Your name in LandSat

The word 'doug' spelled out using LandSat imagery

We have satellite imagery of pretty much every area of land on Earth. This is known as ‘LandSat’ and this website allows you to spell out your name, or any other word, using rivers and other geographical features!

Source: Camp LandSat

The importance of context

Artwork with WorkLife logo

I haven’t actually finished listening to the whole episode yet, but I can already highly recommend this conversation between Adam Grant and Trevor Noah.

The conversation they have about context towards the start is so important that I wish everyone I know would listen to it.

Trevor Noah is widely admired for his quick wit. He’s hosted The Daily Show and the Grammy Awards, sold out huge arenas around the world, had numerous hit comedy specials on Netflix, and published a bestselling memoir, Born a Crime. One of the keys to his success is his ability to read people and communicate clearly. In a lively discussion with Adam, Trevor dives into the importance of context in everything from personal relationships to global politics. The two also debate the best way to improve American politics — and Trevor does a few impromptu impressions, including one of Adam.

Source: WorkLife with Adam Grant

Quote posting done right?

Screenshot of quote post being detached from original

Although there are some positive use cases, one of the most toxic things about X/Twitter has been the ‘dogpiling’ that happens as a result of someone quote-posting something to their followers.

So much so, in fact, that Mastodon has long resisted implementing them at all, although there are some workarounds in various Fediverse apps.

It’s fantastic to see, therefore, that Bluesky, a federated social network that runs on a different protocol to Mastodon, seems to have found a way to allow for non-toxic quote-posting.

(Since Elon Musk refused to comply with Brazilian law leading to X being blocked there, half a million new accounts have been created on Bluesky. Also, lots of people who I recognise from OG Twitter have started following me this week, which would suggest some form of tipping point…)

As of the latest app version, released today (version 1.90), users can view all the quote posts on a given post. Paired with that, you can detach your original post from someone’s quote post.

This helps you maintain control over a thread you started, ideally limiting dog-piling and other forms of harassment. On the other hand, quote posts are often used to correct misinformation too. To address this, we’re leaning into labeling services and hoping to integrate a Community Notes-like feature in the future.

Source: Bluesky blog

A typology of meme-sharing

Kamala Harris 'coconut tree' meme

I don’t know about you, but responding to a family member, friend, or professional contact using a meme has been a daily event for a long time now. It’s now over 12 years since I gave my meme-laden talk at TEDx Warwick based on my doctoral thesis. A year later, I gave a presentation (in the midst of growing a beard for charity) which used nothing but gifs. But I digress.

This article from New Public gives a typology of meme-sharing, which is useful. One of the things I wish I had realised, because looking back with hindsight it’s so obvious, is the way that memes can be weaponised to create in-groups and out-groups, and to perpetuate hate. Not that I could have done much about it.

There are at least three types of connections that can be forged through meme-sharing: bonding over a shared interest such as movies, sports, and more; bonding over an experience or circumstance; or bonding over a feeling or personal sentiment.

[…]

Sharing memes to connect over common interests is perhaps the most surface-level form of meme-sharing. It hinges exclusively on having shared cultural references rather than shared personal commonalities. These exchanges are more likely to occur in established relationships, such as among family and friends that have shared lived experiences and therefore are exposed to the same cultural references and social cues.

[…]

Connecting over a shared interest can be like connecting over a single data point. But people are so much more complicated. That is why connecting over experiences, which are often inherently more rich and embedded with memories and emotion, can yield a more powerful connection.

[…]

Connecting over shared feelings can be even more moving. There is something particularly intimate about connecting over emotions, and at the same time, universal. As humans, we are rarely self-aware of all of our internal thoughts and feelings, so a meme that can connect with them unexpectedly, like the one below (sound on!), can be powerful.

[…]

These three ways of forming connection through meme-sharing are of course not mutually exclusive, and they are far from being collectively exhaustive. There are definitely instances of meme-sharing which accomplish all of these.

And there can also be situations in which people share memes for reasons outside of connecting over identity, experience, or feelings. Rather, what this typology illustrates is the ways that we can (and do!) cultivate belonging with others online through the sharing of comedic imagery.

Source: New Public

Image: Know Your Meme

Fediverse governance models

Abstract geometric structure with metallic rods, wooden elements, green tubes, and moss-like textures, against a light gradient background.

Erin Kissane and Darius Kazemi have published a report on Fediverse governance which is the kind of thing I would have read with relish when I was Product Manager of MoodleNet. And even before then when I was presenting on decentralisation and censorship in the midst of the ‘illegal’ Catalan independence referendum.

These days, while still interested in this kind of stuff, and in particular in [how misinformation might be countered in decentralised networks](bonfirenetworks.org/posts/zap…), I’m not going to be reading 40,000 words on the subject (PDF). Instead, I’ll point others to it, and in particular to the six-page ‘quick start’ guide for those who might be new to the idea of federated governance.

I wouldn’t have guessed, going in, that we’d end up with the major structural categories we landed on—moderation, server leadership, and federated diplomacy—but after spending so much time eyeball-deep in interview transcripts, I think it’s a pretty reasonable structure for discussing the big picture of governance. (The real gold is of course in the excerpts and summaries from our participants, who continuously challenged and surprised us.)

There are no manifestos to be found here, except in that our participants often eloquently and sometimes passionately express their hopes for the fediverse. There are a lot of assumptions, most of which we’ve tried to be pretty scrupulous about calling out in the text, but anything this chunky contains plentiful grist for both principled disagreement and the other kind. Our aim is to describe and convey the knowledge inherent in fediverse server teams, so we’ve really stuck close to the kinds of problems, risks, needs, and challenges those folks expressed.

Source: Erin Kissane

Image: Google Deepmind

Life-ready signals

Black background with stylised white-outlined hand pointing to the left

To be a professional, a knowledge worker in the 21st century, means keeping up with jargon, acronyms, and shifts in terminology. Some of this is necessary, as I’ve explained in my work on ambiguity, some isn’t.

This article by Kristine Chompff on the Edalex blog introduces a term new to me: “life-ready signals”. It doesn’t seem to me destined to catch on, any more than ‘durable skills’ has or will, but it is nevertheless a worthy attempt to recognise the behaviours that go around hard skills and knowledge.

I also think that we need to do something about the acronym soup: while I might understand someone saying that we use RSDs to build a VC as part of a learner’s PER within an LER ecosystem, it’s gobbledegook to everyone else.

For anyone interested in this kind of thing, we have a community of practice called Open Recognition is for Everybody (ORE), which you can discover and join at badges.community.

For us to understand life-ready signals, we must for a second talk about semiotics and the definition of terms. Because the term “life-ready skills” has evolved, so has the term “life-ready signals.”

Semiotics is the study of signs and symbols, of which language is a part. It depends partly on the object being described, but also on the way the person reading that description interprets it. For these terms to be meaningful, we all need to interpret them in the same way.

Life-ready skills are the thing being described. Life-ready signals are those “signs” being used to describe them. For a learner to tell their own story, they need to be equipped not only with the skills themselves, but the proper “signs” to share them with others in a meaningful way.

It’s also important to note here that with the rise of generative artificial intelligence (GenAI) there will always be skills that machines will never master, and those are the life-ready skills we are discussing here.

Source: Edalex blog

Image: Giulia May

Begetting strangers

Image showing baby emerging from an 'egg' made up of a human head/face

This is such a great article by Joshua Rothman in The New Yorker. Quoting philosophers, he concisely summarises the difficulty of parenting, examines some of the tensions, and settles on a position with which I’d agree.

It’s such a hard thing to do, especially with your first child, that I’m amazed there aren’t some kind of mandatory classes. The hardest bit isn’t even dealing with a new helpless infant, but the changes that kids go through on the road to adulthood. While we all went through them from the inside, trying to understand and help from the outside (while dealing with your own issues) is so difficult.

The fact that children are their own people can come as a surprise to parents. This is partly because young kids are so hopelessly dependent, but it also reflects how we think about parenthood… We talk as though having children is mainly “a matter of inclination, of personal desire, of appetite,” the philosopher Mara van der Lugt writes, in “Begetting: What Does It Mean to Create a Child?” She sees this as totally backward… Having children, van der Lugt argues, might be best seen as “a cosmic intervention, something great, and wondrous—and terrible.” We are deciding “that life is worth living on behalf of a person who cannot be consulted,” and we “must be prepared, at any point, to be held accountable for their creation.”

[…]

Van der Lugt is not pronatalist, but she isn’t anti-natalist, either. Her contention is simply that we should confront these questions more directly. Typically, she observes, it’s people who don’t want kids who are asked to explain themselves. Maybe it should work the other way, so that, when someone says that they want kids, people ask, “Why?”

[…]

In a 2014 book, “Family Values: The Ethics of Parent-Child Relationships,” the philosopher Harry Brighouse and the political theorist Adam Swift ask how we might relate to our children if we understand them, from the beginning of their lives, as independent individuals. There’s a tension, they write, between the ideals of a liberal society and the widely held “proprietarian view” of children: “The idea that children in some sense belong to their parents continues to influence many who reject the once-common view that wives belong to their husbands,” they note. But what’s the alternative? What would a family look like if the fundamental separateness of children was taken for granted, even during the years when they depend on us the most?

[…]

If the relationship between parents and children is based not on the proprietary “ownership” of kids by their parents but on the right of children to a certain kind of upbringing, then it makes sense to ask what parents must do to satisfy that right—and, conversely, what’s irrelevant to satisfying it. Brighouse and Swift, after pushing and prodding their ideas in various ways, conclude that their version of the family is a little less dynastic than usual. Some people, for instance, think that parents are entitled to do everything they can to give their children advantages in life. But, as the authors see it, some ways of seeking to advantage your children—from leaving them inheritances to paying for élite schooling—are not part of the bundle of “familial relationship goods” to which kids have a right; in fact, confusing these transactional acts for those goods—love, presence, moral tutelage, and so on—would be a mistake. This isn’t to say that parents mustn’t give their kids huge inheritances or send them to private schools. But it is to say that, if the government decides to raise the inheritance tax, it isn’t interfering with some sacred parental right.

[…]

“The basic point is simple,” they write. “Children are separate people, with their own lives to lead, and the right to make, and act on, their own judgments about how they are to live those lives. They are not the property of their parents.”

Source: The New Yorker

(use Archive Buttons if you can’t get access)

The thorny problem of authorship in a world of AI

Code on a computer screen

This is an interesting article by Justine Tunney, who argues that Open Source developers are having their contributions erased from history by LLMs. It’s interesting to consider this by field, as LLMs seem to have no problem explaining accurately what I’m known for (digital literacies, etc.)

As Tunney points out, the world of Open Source is a gift economy. But if we’re gifting things to something that ingests everything indiscriminately and then regurgitates it in a way that erases authorship, is that problematic?

In a world of infinite automation and infinite surveillance, survival is going to depend on being the least boring person. Over my career I’ve written and attached my name to thousands of public source code files. I know they are being scraped from the web and used to train AIs. But if I ask something like Claude, “what sort of code has Justine Tunney written?” it hasn’t got the faintest idea. Instead it thinks I’m a political activist, since it feels no guilt remembering that I attended a protest on Wall Street 13 years ago. But all of the positive things I’ve contributed to society? Gifts I took risks and made great personal sacrifices to give? It’d be the same as if I’d sat on my hands.

I suspect what happens is the people who train AI models treat open source authorship information as PII [Personally Identifiable Information]. When assembling their datasets, they use programs you can find on GitHub, such as presidio, a tool made by Microsoft to scrub knowledge of people from the data they collect. So when AIs are trained on my code, they don’t consider my git metadata, they don’t consider my copyright comments; they just want the wisdom and alpha my code contains, and not the story of the people who wrote it. When the World Wide Web was first introduced to the public in the 90’s, consumers primarily used it for porn, and while things have changed, the collective mindset and policymaking are still stuck in that era. Tech companies do such a great job protecting privacy that they’ll erase us from the book of life in the process.

Is this the future we want? Imagine if Isaac Newton’s name was erased, but the calculus textbooks remained. If we dehumanize knowledge in this manner, then we risk breaking one of the fundamental pillars that’s enabled science and technology to work in our society these last 500 years. I’ve yet to meet a scientist, aside from maybe Satoshi Nakamoto, who prefers to publish papers anonymously. I’m not sure if I would have gotten into coding when I was a child if I couldn’t have role models like Linus Torvalds to respect. He helped me get where I am today, breathing vibrant life into the digital form of a new kind of child. So if these AIs like Claude are learning from my code, then what I want is for Claude to know and remember that I helped it. This is actually required by the ISC license.

Source: justine’s web page

Image: Marcus Spiske

Government and algorithmic bias

UK Visas & Immigration sign

If any government is to be of, by, and for the people, then we can’t have unaccountable black-box algorithms making important decisions. This is a welcome move.

Artificial intelligence and algorithmic tools used by central government are to be published on a public register after warnings they can contain “entrenched” racism and bias.

Officials confirmed this weekend that tools challenged by campaigners over alleged secrecy and a risk of bias will be named shortly. The technology has been used for a range of purposes, from trying to detect sham marriages to rooting out fraud and error in benefit claims.

[…]

In August 2020, the Home Office agreed to stop using a computer algorithm to help sort visa applications after it was claimed it contained “entrenched racism and bias”. Officials suspended the algorithm after a legal challenge by the Joint Council for the Welfare of Immigrants and the digital rights group Foxglove.

It was claimed by Foxglove that some nationalities were automatically given a “red” traffic-light risk score, and those people were more likely to be denied a visa. It said the process amounted to racial discrimination.

[…]

Departments are likely to face further calls to reveal more details on how their AI systems work and the measures taken to reduce the risk of bias. The DWP is using AI to detect potential fraud in advance claims for universal credit, and has more in development to detect fraud in other areas.

Source: The Guardian

'Meta-work' is how we get past all the one-size-fits-none approaches

Documents and pipes arranged to suggest a workflow

Alexandra Samuel points out in this newsletter that a lot of the work we do as knowledge workers will increasingly be ‘meta-work’. Introducing a 7-step approach, she first of all outlines why it’s necessary, especially in a ‘neurovarious’ world.

I think this is a really important article, and hits the sweet spot between AI literacy, systems thinking, and working openly. One to bookmark, for sure.

In the AI era, knowledge production will increasingly get done by machines—which means that the meta-work of choosing tools and processes is not just the work that remains for humans, but the most valuable kind of work you can do. Meta-work is how we get past all the one-size-fits-none approaches that have cursed us with overload and overwhelm, because we’re trying to work in a way that doesn’t account for the vast differences in how each of us thinks, perceives, and communicates.

When we get overwhelmed by our tasks or stuck in our writing or thinking, it is often because we need to do some meta-work.  

[…]

The more deeply I dive into the world of neurovariety—the functional differences in how workers think, perceive and communicate—the more I see that effective meta-work depends on understanding your own particular thinking, perception and communication style.

Meta-work requires you to think about how you build and create knowledge, to consider where you truly add value to your organization or to the world, and to recognize that there is no right answer to any of these questions—just the closest answer you can find for yourself, right now.

Source: Thrive at Work

Reimagining misinformation

Photo modified by staff of 'The Verge' to add a car/bicycle accident

Google’s new Pixel 9 smartphones are being heavily marketed as having their AI tool, Gemini, onboard. One of the things this enables is a tool called ‘Reimagine’, which lets you add new things to a scene simply via a text prompt.

It’s getting easier and easier to create realistic versions of events which never really happened. In the example above, they circumvented the cursory safeguards to simulate an accident. Fun times.

Reimagine is a logical extension of last year’s Magic Editor tools, which let you select and erase parts of a scene or change the sky to look like a sunset. It was nothing shocking. But Reimagine doesn’t just take it a step further — it kicks the whole door down. You can select any nonhuman object or portion of a scene and type in a text prompt to generate something in that space. The results are often very convincing and even uncanny. The lighting, shadows, and perspective usually match the original photo. You can add fun stuff, sure, like wildflowers or rainbows or whatever. But that’s not the problem.

A couple of my colleagues helped me test the boundaries of Reimagine with their Pixel 9 and 9 Pro review units, and we got it to generate some very disturbing things. Some of this required some creative prompting to work around the obvious guardrails; if you choose your words carefully, you can get it to create a reasonably convincing body under a blood-stained sheet.

In our week of testing, we added car wrecks, smoking bombs in public places, sheets that appear to cover bloody corpses, and drug paraphernalia to images. That seems bad. As a reminder, this isn’t some piece of specialized software we went out of our way to use — it’s all built into a phone that my dad could walk into Verizon and buy.

Source: The Verge

Where in the world is that shadow?

My son enjoys playing GeoGuessr, which is “a geography game, in which you are dropped somewhere in the world in a street view panorama and your mission is to find clues and guess your location on the world map”.

Some people are incredibly good at it, and can identify places within seconds. They use clues such as shadows, streetlights, and even the colour of soil or sand.

Bellingcat, an investigative journalism group specialising in “fact-checking and open-source intelligence”, has released a tool to help figure out the location of images or video for more serious purposes. This is particularly important in a world of misinformation.

Geolocation is often a time-consuming task.

Researchers often spend hours poring over photos, scouring satellite images and sifting through street view.

But what if there was another way to quickly narrow down your search area?

Bellingcat’s new Shadow Finder Tool, developed with our Discord community, helps you quickly narrow down where an image was taken, by reducing your search area from the entire globe to just a handful of countries and locations.
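The geometry behind this kind of tool is worth spelling out. For a vertical object, the sun’s elevation angle determines the ratio of its height to its shadow length, and (combined with the date and time) that constrains where on Earth a photo could have been taken. As a rough illustration of the underlying trigonometry only — this is not Bellingcat’s actual implementation — here is a minimal Python sketch:

```python
import math

def sun_elevation_deg(object_height_m: float, shadow_length_m: float) -> float:
    """Sun elevation angle (in degrees) implied by a vertical object and its shadow."""
    return math.degrees(math.atan2(object_height_m, shadow_length_m))

# A 2 m pole casting a 2 m shadow puts the sun 45 degrees above the horizon;
# the shorter the shadow, the higher the sun.
print(round(sun_elevation_deg(2.0, 2.0)))  # 45
```

Given a timestamp, that elevation angle can then be compared against computed solar positions to rule out most of the globe, which is essentially the narrowing-down the tool performs.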

Source: Bellingcat

Tool: Shadow Finder

Dark data is a climate concern

Nyan Cat

I mean, yes, of course I knew that data files are stored on servers and that those servers consume electricity. But this is a good example of reframing. How many emails have I got stored that I will never look at again? How many files stored in the cloud ‘just in case’?

Multiply that by millions (and billions) of internet users and we’ve got… a climate-relevant issue.

When “I can has cheezburger?” became one of the first internet memes to blow our minds, it’s unlikely that anyone worried about how much energy it would use up.

But research has now found that the vast majority of data stored in the cloud is “dark data”, meaning it is used once then never visited again. That means that all the memes and jokes and films that we love to share with friends and family – from “All your base are belong to us”, through Ryan Gosling saying “Hey Girl”, to Tim Walz with a piglet – are out there somewhere, sitting in a datacentre, using up energy. By 2030, the National Grid anticipates that datacentres will account for just under 6% of the UK’s total electricity consumption, so tackling junk data is an important part of tackling the climate crisis.

[…]

One funny meme isn’t going to destroy the planet, of course, but the millions stored, unused, in people’s camera rolls does have an impact, he explained: “The one picture isn’t going to make a drastic impact. But of course, if you maybe go into your own phone and you look at all the legacy pictures that you have, cumulatively, that creates quite a big impression in terms of energy consumption.”

Cloud operators and tech companies have a financial incentive to stop people from deleting junk data, as the more data that is stored, the more people pay to use their systems. “There are maybe other big contributors to [greenhouse gas] emissions, which maybe haven’t been picked up. And we would certainly argue that data is one of those and it will grow and get bigger, particularly think about that huge explosion but also, we know through forecasts that in the next year to two, if we take all the renewable energy in the world, that wouldn’t be enough to accommodate the amount of energy data requires. So that’s quite a scary thought.”

Source: The Guardian

Image: Nyan Cat

There is no such thing as a life that makes sense

Doormat covered in leaves saying THE GOOD KIND OF WEIRD LIVES HERE

I definitely agree with the author of this post that there are a couple of wonderful things about reading history. First, you realise that almost everyone in the past had it much harder than you do, which puts things in perspective. Second, you realise that there are many and varied ways to live a happy and/or flourishing life.

In addition, the passing comment about credentials not mattering when people realise you’re obsessive enough about a certain area is probably an insight worth unpacking.

Most of my friends have life paths that go something like this: they got ruinously obsessed with something to the exclusion of everything else and then worked on it. And eventually that failed or succeeded and then they got ruinously obsessed with something else and started working on that. And it turns out that if you’re obsessive enough the credentials thing sort of goes away because people are just like, oh, you’re clearly competent and bizarrely knowledgeable about this thing you’re obsessed with, I want to help you work on it.

If you operate this way you end up with a weird life, because in a conventional career path there are all these rules and customs you’re supposed to follow, like you’re supposed to major in W in undergrad and get X internship and then go to Y for grad school and then work at Z. The truth is, most of the people I know are just too ADHD or impatient or unconventional to follow the path that’s expected of them. They may not have even been aware of what the “normal” thing to do was. And I’m certainly not recommending that or glamorizing it, because rules and customs exist for a reason; they are necessarily useful. But it’s helpful to know that some people end up fine even when they don’t do the normal thing.

Something I wish someone had told me as a kid is that the only real “rule” for work is that you have to be able to pay your rent and not hurt anyone and not break any laws. And within those confines you can do literally anything, hopefully something you find personally fulfilling.

[…]

Reading history is useful partially because it makes you understand how varied people’s lives really are. The artists I admire have had lives that included nervous breakdowns and fleeing countries because of war and leaving their wife in another continent and writing their first novel to pay off gambling debts. That helps me remember that there is no such thing as a life that makes sense, or at least that’s not something I need to aspire to.

Source: bookbear express

Image: Derick McKinney

Google Calendar illustration trigger words

Example of Google Calendar illustrations

If you use the Google Calendar app in ‘schedule’ view, you’ll no doubt be familiar with the automatic illustrations added for some events. While looking for a way to stop it showing an American Football instead of a ‘soccer’ ball, I came across a list of all the ways you can trigger the illustrations.

Those illustrations are triggered by the presence of certain codewords within your event titles. And once you know what codewords cause what illustrations to appear, you can hack the system, in a sense, and make any event in your agenda stand out with a specific illustration around it.

Source: The Intelligence

(someone’s also created a GitHub repo)

You get water from food as well, you know

Person drinking water

It’s always puzzled me when people drink huge amounts of water. Whether it’s for ‘detox’ reasons, as part of a diet, or something else, it always seems to be tinged with a bit of moral showboating.

I do a fair amount of exercise. I drink water with some BCAA powder in it when I do. Other than that, I have a couple of cups of tea a day and water with my meals. Turns out, this is probably the right approach.

It is a common belief that you have to drink 6-8 glasses of water per day. Almost everyone has heard this recommendation at some point although if you were to ask someone why you need to drink this much water every day, they probably wouldn’t be able to tell you. There is usually some vague idea that you need to drink water to flush toxins out of your system. Perhaps someone will suggest that drinking water is good for your kidneys since they filter the blood and regulate water balance. Unfortunately, none of these ideas is quite true and the 6-8 glasses myth comes from a fundamental misunderstanding of some basic physiology.

[…]

You will always lose water vapour in your breath, provided you keep breathing, and you will always produce… watery, odour-free sweat even if you move to the Arctic. Of course, if you move to the tropics you will produce much more sweat to compensate for the extra heat. But all told, roughly 1.5-2 litres of water loss are obligatory losses that we cannot do anything about. Those who exercise, live in hot climates or have a fever will obviously lose more water because of more sweating. Thus, a human being needs to replenish the roughly 2 litres of water they lose every day from sweating, breathing, and urination. The actual notion of 8 glasses a day originates from a 1945 US Food and Nutrition Board recommendation of 2.5 litres of daily water intake. But what is generally forgotten from this recommendation is, firstly, that it was not based on any research and, secondly, that the recommendation stated most of the water intake could come from food sources.

All food has some water in it, although obviously fresh juicy fruits will have more than, say, a box of raisins. Suffice it to say that by eating regular food and having coffee, juice or what have you, you will end up consuming 2 litres of water without having to go seek it out specifically. If you find yourself in a water deficit, your body has a very simple mechanism for letting you know. Put simply, you will get thirsty.

If you are thirsty, drink water. If you are not thirsty, then you do not need to go out and purposefully drink 6-8 glasses of water a day, since you will probably get all the water you need in your regular diet. One important caveat to remember, though, is that on hot summer days your water losses from sweating go up, and if you plan to spend some time outdoors, having water with you is important to avoid dehydration and heat stroke. While the thirst reflex is pretty reliable, it does tend to fade with age, and older people are more likely to become dehydrated without realizing it. Thus, the take-home message is: drink water when you are thirsty, but on very hot days it might not be a bad idea to stay ahead of the curve and keep hydrated.

Source: McGill University Office for Science and Society

Tugging at metaphors

Statue of dog tugging at a rope

Christina Hendricks is a Professor of Teaching in Philosophy at the University of British Columbia-Vancouver. In this post she reflects on a session run by fellow Canadian and open educator, Dave Cormier, in which he discussed ‘messy’ situations where we’re not sure what should be done.

The solution suggested seems to be to ‘tug’ things in a particular direction based on your values. I’d argue for a different, more systemic approach, given what I’ve learned so far through my MSc. What you need when confronted with a messy, problematic situation are boundaries, holistic thinking, and multiple perspectives.

I really appreciated where Dave landed in his presentation: rather than only feeling stuck, suspended, we can consult our values and make a move based on those, we can tug the rope in a tug of war in the direction of our values and work to move things from there. The focus on values is key here: ask yourself what are your values as they relate to this situation, and make decisions and act based on those, knowing that’s enough in uncertain situations. Which doesn’t mean, of course, that you can’t revisit your values and how they apply to the situation if either of those things changes, but that it’s a landing place and it’s solid enough for the moment. He talked about how we can have conversations with students and others about why we would do something in a particular situation, rather than what the right answer is, focusing on the values that are moving us.

To do so requires that we are clear about what our values are, which is in some cases more easily said than done. This is something near and dear to my heart as a philosopher, as trying to distill what is underlying our views and our decisions, what kinds of reasons and values, is part of our bread and butter.

[…]

[W]hat if we thought about complex issues and structures more like flexible webs? (Which is an image that reminds me of other of Dave Cormier’s work such as that on rhizomatic learning.) So that if you tug on one part it can still move and the other parts will move as well (or break I suppose, which in some cases may not be a bad thing).

Source: You’re The Teacher

Image: David Ellis (CC BY-NC-ND)

You don't need permission, you need advice

Pedestrian crossing signal showing green person

Deciding that you want to do something and then asking for advice is different to asking for permission. In general, permission-seeking behaviour in adults is a sign of weakness, even in hierarchical organisations. It’s either a sign of personal weakness or, if there are consequences for acting with authority in your own domain of influence, a sign of organisational weakness.

One of the most common anti-patterns I see that can create conflict in an otherwise collaborative environment is people asking for permission instead of advice. This is such an insidious practice that it not only sounds reasonable, it actually sounds like the right thing to do: “Hey, I was thinking about doing X, would you be on board with that?”

Advice… is easy. “Hey, I was thinking about doing X, what advice would you give me on that?” In this instance you are showing a lot of respect to the person you are asking but not saddling them with responsibility because the decision is still on you. Your obvious goal with this approach is to do the best you can, so they are going to trust you aren’t hiding any gritty details and therefore aren’t going to waste time second guessing your premises. They are going to feel comfortable giving you all their honest feedback knowing the responsibility lies with you, and your ego will remain intact because you invited the criticism on yourself directly.

Source: boz

Image: Mark König

Give readers a break

Illustration of a book, hand, pen, and tape measure

I recently struggled with the middle of a book which I really wanted to finish, by an author I really like. The chapters were too long for the subject matter, and I gave up.

Contrast that with The Road which is quite the harrowing read at times but, as this article points out, doesn’t have any chapters at all. I completed that without any problem. Other books, like the Reacher series, have quite short chapters. The trick, it seems, is to have chapters, or at least some kind of gaps to give readers a break, at appropriate points.

With our phones offering us immediate dopamine, books now have to work harder to keep us engaged. ‘Busy-ness’ has become an increasing distraction, through work and parenting as well as social media. That’s why you may have noticed shorter chapters in more recent books, especially ones aimed at readers of millennial age and below (that’s pretty much everyone under forty).

As any writer will find, however, there is no magic button when it comes to chapter length: the ‘right’ one is a blend for each novel being written. There’s no point in worrying about the length of your piece of string if the string itself isn’t useful or compelling.

Source: Penguin

14kB

Close up of green leaf

It’s been four years since I switched to the Susty theme for my WordPress-powered blog. Not long after, I also redesigned my home page to be less than 1kB (although it’s slightly more than that now).

Micro.blog, which I use to host Thought Shrapnel, is terrible in this regard. Using Cloudflare’s URL Scan gave a ‘bytes transferred’ total of 12.24MB, which is 3,000 times larger than the 4.15kB for my home page, and 14 times larger than the 891.28kB (including images) for my WordPress-powered blog.

Minimising the size of your site is not only a good idea from a sustainability point of view; a fast-loading website is also better for user experience and SEO. The extract below explains why having a site that is less than 14kB (compressed) is a good idea from a technical perspective.

Most web servers’ TCP slow start algorithm starts by sending 10 TCP packets.

The maximum size of a TCP packet is 1500 bytes.

This maximum is not set by the TCP specification; it comes from the Ethernet standard.

Each TCP packet uses 40 bytes in its header — 20 bytes for IP and an additional 20 bytes for TCP.

That leaves 1460 bytes per TCP packet. 10 × 1460 = 14,600 bytes, or roughly 14kB!

So if you can fit your website — or the critical parts of it — into 14kB, you can save visitors a lot of time — the time it takes for one round trip between them and your website’s server.
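The arithmetic in the extract is easy to check yourself. Here is a quick sketch using the figures quoted above (the numbers are the article’s typical-case assumptions, not measurements of any particular server):

```python
# Figures quoted in the extract above, not measurements of a real server.
MTU = 1500            # Ethernet maximum transmission unit, in bytes
HEADER_BYTES = 40     # combined IP + TCP header overhead per packet
INITIAL_WINDOW = 10   # packets a typical server sends in the first slow-start round

payload_per_packet = MTU - HEADER_BYTES             # 1460 usable bytes per packet
first_round_trip_budget = INITIAL_WINDOW * payload_per_packet

print(first_round_trip_budget)  # 14600 bytes, i.e. roughly 14kB
```

If your compressed HTML plus critical CSS fits inside that budget, the browser can start rendering after a single round trip rather than waiting for the congestion window to grow.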

Source: endtimes.dev

Image: Markus Spiske

Doing things that don’t scale in pursuit of things that can’t scale

Note: I’ve been away from here for just over a month, and my backlog is so huge that I can’t put off posting any longer!

Illustration of knitting (hands, needles, wool)

I’ve said many times over the last few years to friends and family that I’ve achieved all that I want to in life. That, I think, makes it easier to ‘pursue things that don’t scale’ — but so does studying philosophy from my teenage years onwards.

This post talks about “doing things that don’t scale in pursuit of things that can’t scale”, which is a great way of describing things that are human-scale. One of the examples given in this post is knitting, which is cited in an article in The Guardian as an example of the kinds of arts and crafts that promote wellbeing.

To some extent, of course, all of this is borne of maturity, of life experience, and of approaching and then reaching middle-age.

Chasing scale seems to be a kind of early life affliction. The more you chase it, the bigger the thing you chase gets. Perhaps it’s a natural desire to see how important we can be or at least how important our creations can be to the world (and hence how important we can be by proxy …). A desire to take on a seemingly insurmountable challenge, perhaps a noble one (though not always), and see if we can conquer it.

Yet without limits, we try to find them. This is true on many levels, whether it’s about how big we want our creations to become or how people should be able to lead their personal lives or how much candy kids can eat after a Halloween haul. But I think having no limits is unnatural. Chasing scale to the level we do is too. Whether we succeed or not, it stresses the system and inevitably burns us out.

Then a new motivation seems to surface, a desire to pursue something that can’t scale. See, my theory is that chasing things that scale makes you need therapy, and the therapy is pursuing things that can’t scale. The antidote to burnout and the existential inquiry it brings seems to be doing things that don’t scale in pursuit of things that can’t scale. It becomes exciting not to see what you can do without limits, but to see what you can do with them.

What are these pursuits that can’t scale? They could be skills, like archery or chess or cooking. They could be close relationships, like making friends. Maybe it’s building a truckload of IKEA furniture. Or maybe it’s starting a local small business. These pursuits could be considered hobbies or something more serious. It doesn’t matter so much what it is as that it has a clear and visible ceiling.

Source: Working Theorys

Stand up for yourself. Challenge authority. Tell your rude co-worker to shut up.

Office environment with magnifying glass over one desk

I can’t say I’ve ever read Roxane Gay’s Work Friend column for The New York Times during the last four years, but I enjoyed reading her sign-off article. She talks about the advice she really wanted to give people (usually “quit your job”) and the things that we really want, but will never be able to get, from a job.

To work, for so many of us, is to want, want, want. To want to be happy at work. To feel useful and respected. To grow professionally and fulfill your ambitions. To be recognized as leaders. To be able to share what you believe with the people you’re around for eight or more hours a day. To be loyal and hope your employers will reciprocate. To be compensated fairly. To take time off to recharge and enjoy the fruits of your labor. To conquer the world. To do a good enough job and coast through middle age to retirement.

[…]

We shouldn’t have to suffer or work several jobs or tolerate intolerable conditions just to eke out a living, but a great many of us do just that. We feel trapped and helpless and sometimes desperate. We tolerate the intolerable because there is no choice. We ask questions for which we already know the answers because change is terrifying and we can’t really afford to risk the loss of income when rent is due and health insurance is tied to employment and someday we will have to stop working and will still have financial obligations.

I was mindful of these realities as I answered your Work Friend questions. Still, in my heart of hearts, I always wanted to tell you to quit your job. Negotiate for the salary you deserve. Stand up for yourself. Challenge authority. Tell your rude co-worker to shut up. Report your boss to everyone and anyone who will listen. Consult a lawyer. Did I mention quit your job? Go back to graduate school. Leave some deodorant and mouthwash on your smelly co-worker’s desk. Send that angry email to your undermining colleague. Call out your boss when he makes a wildly inappropriate comment. No, your boss should not force you to work out of her kitchen. Mind your own business about your colleague’s weird hobby. Mind your own business, in general. Blow the damn whistle on your employer’s cutting corners and putting people’s lives in danger. Tell the irresponsible dog owner to learn how to properly care for the dog. No, you don’t owe your employer anything beyond doing your job well in exchange for compensation. No, your company is not your family. No, the job will never, ever love you.

This is all to say that I wish we lived in a world where I could offer you frank, unfiltered professional advice, but I know we do not live in such a world.

Source: Goodbye, Work Friends

(use Archive Buttons if you can’t access directly)

You don't have to like what other people like, or do what other people do

Orange text on yellow background: 'Why don't you just switch off your television set and go and do something less boring instead...?'

Warren Ellis responds to a post by Jay Springett on ‘surface flatness’ by reframing the problem as… not one we have to worry about. It’s good advice: so long as you can sustain an income by not having to interact with online walled gardens, why care what other people do?

(I say this slightly hesitantly, as I often wring my hands about the amount of disinformation on mainstream social networks and chat apps, but there’s not much I personally can do about it.)

If you treat all those internet platforms as television, then you can turn them off and go for a walk on the internet instead. Structured hypertext products are still walled gardens, they’re just themed gardens, like a physic garden. The thing about walled gardens is that most people like them. They’re easy.

Jay has a point about the big platforms generally deprecating a lot of hypertext functions, as they lead people out of the walled gardens. But people like walled gardens. Even if they’re full of toxic plants, stinking blooms and corpse flowers. And besides: you have no more hope of imposing a new way of doing things on the internet than of preventing the BBC from commissioning any new programme with Michael McIntyre in it.

Leave ’em to it. Network tv isn’t all of broadcast culture, just as the big platforms aren’t all of internet culture, and all that shit is still hyperlinked. Leave the platforms to it. Go for a walk and report your notes.

Source: Warren Ellis

Summer digital detox

iPad, coffee, and leaf on a white surface

During my run yesterday morning, I listened to a great podcast episode about doing different things during the summer months. Now, as I wait to pick up my daughter from school in the driving rain, it might not feel like summer here in the UK, but the advice is nonetheless spot-on.

In particular, I’ve taken the advice to do a bit of a digital detox and slow down a bit. So I’ve logged out of my Mastodon, Bluesky, and LinkedIn accounts, and will be back… mañana.

The summer months have a different flavour and feel to the other months of the year; there’s something different about our energy, motivation and willpower. And, if we can harness those differences, we have a golden opportunity to make meaningful changes that can have a transformative impact on our health, happiness and relationships and teach us things about ourselves that we previously did not know.

Source: Feel Better Live More

Image: Leone Venter

Informatics of domination

Part of the Calculating Empires map

I’ve had this incredible interactive map, created by Kate Crawford and Vladan Joler, bookmarked for a while now. I’m never sure what to do with so much information in one place that isn’t primarily text-based.

I’m sharing it while still exploring it myself, with the hope that others will be able to find a use for it rather than be overwhelmed!

Calculating Empires is a large-scale research visualization exploring how technical and social structures co-evolved over five centuries. The aim is to view the contemporary period in a longer trajectory of ideas, devices, infrastructures, and systems of power. It traces technological patterns of colonialism, militarization, automation, and enclosure since 1500 to show how these forces still subjugate and how they might be unwound. By tracking these imperial pathways, Calculating Empires offers a means of seeing our technological present in a deeper historical context. And by investigating how past empires have calculated, we can see how they created the conditions of empire today.

[…]

Calculating Empires takes Donna Haraway’s provocation literally that we need to map the “informatics of domination.” The technologies of today are the latest manifestations of a long line of entangled systems of knowledge and control. This is the purpose of our visual genealogy: to show the complex interplay of systems of power, information, and circumstance across terrain and time, in order to imagine how things could be otherwise.

This work can never be complete: it is necessarily partial, subjective, and drawn from our own positionality. But that openness is part of the project. You are invited to read, reflect, and consider your own history in the recurring stories of calculation and empire. As the overwhelming now continues to unfold, Calculating Empires offers the possibility of looking back, in order to consider how different futures could be envisioned and realized.

Source: Calculating Empires: A Genealogy of Technology and Power Since 1500

If you're not a part of the solution, there's good money to be made in prolonging the problem

Open-plan office with wooden tables, string lights, and people working at computers.

There’s a lot of money sloshing around at the top of society, being channeled into different schemes and offshore bank accounts. To enable this, there are a lot of bullshit jobs, including PR agencies spewing out credulous content.

Joan Westenberg was one of these people, until one day, she decided not to be. As she quotes Upton Sinclair as saying, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

One morning, I sat down at my desk to craft yet another press release touting yet another “game-changing” startup that had raised - yet another - $25 million. And I realized I couldn’t remember the last time I’d written something I believed in. The words that used to flow felt like trying to squeeze ancient toothpaste from an empty tube.

That was the day I cracked.

It wasn’t about the individual startups or the overhyped products. It was the whole damn ecosystem—if we can call it that. The inflated valuations, cult-like frat house “culture,” and the relentless, mindless pursuit of growth that comfortably glossed over the human cost of “disruption.”

Somewhere along the way, I’d allowed my writing—the thing that used to give me purpose—to be co-opted by the bullshit industrial complex. I’d convinced myself that I was part of something bigger, something world-changing. But deep down, in the quiet moments between pitch meetings and product launches, I knew better.

Source: Joan Westenberg

Look out for surplus fingers

Montage of still from AI-generated (or modified) videos with the word 'FAKE' over the top

As I always say about misinformation and disinformation: people believe what they want to believe. So you don’t actually need very sophisticated ‘deepfakes’ for people to reshare fake content.

That being said, this article in The Guardian does a decent job of showing some ways of spotting deepfake content, along with some examples.

Look out for surplus fingers, compare mannerisms with real recordings and apply good old-fashioned common sense and scepticism, experts advise

In a crucial election year for the world, with the UK, US and France among the countries going to the polls, disinformation is swirling around social media.

There is much concern about deepfakes, or artificial intelligence-generated images or audio of leading political figures designed to mislead voters, and whether they will affect results.

They have not been a huge feature of the UK election so far, but there has been a steady supply of examples from around the world, including in the US where a presidential election looms.

Here are the visual elements to look out for.

Source: The Guardian

New materials for a super-heated world

A floating white fabric over a colorfully edited aerial view of an urban area with green buildings.

As I mentioned, I’m reading Lifehouse by Adam Greenfield at the moment, which starts with some stark information about global heating. We need lots of workarounds to combat this, and a new material looks particularly promising.

This new textile could provide at least a little relief. It uses a process called radiative cooling, which describes how objects cool down by radiating thermal energy into their surroundings. Radiative cooling textiles do already exist, but most just reflect the sun’s heat. That “works very well if you’re in an open field,” says Po-Chun Hsu, a molecular engineering professor at the University of Chicago, whose team recently published a paper on their new material in the journal Science. But not in a city.

What those other fabrics don’t do is reflect the ambient heat coming from the street below or a nearby building. The heat coming directly from the sun’s rays and the heat emitted from a sun-baked street aren’t the same; they have different wavelengths. That means a material has to have two different “optical properties” to reflect both.

To do that, the researchers created a three-layer textile. The top layer is made of polymethylpentene or PMP, a type of plastic commonly used for packaging; the researchers had to figure out how to spin it into a fiber. The second is a sheet of silver nanowires, which acts like a mirror to reflect infrared radiation. Together, these block both the solar radiation and the ambient radiation reflected off of surfaces. The third layer can be any conventional fabric, like wool or cotton. Though there are multiple layers, the main thickness comes from the conventional fabric; the top layer is about 1/100th of a human hair.

In outdoor tests in Arizona, the textile stayed 4.1 degrees Fahrenheit (2.3 degrees Celsius) cooler than “broadband emitter” fabrics used for outdoor sports, and 16 F (8.9 C) cooler than regular silk, a breathable fabric often used for dresses and shirts.

Along with clothing, the researchers say this cooling textile could be used on buildings, in cars, or even for food storage and shipping in order to lessen the need for refrigeration, which has a significant climate impact of its own. Next, Hsu’s team is collaborating with other teams to see how the textile could have a health benefit for those in extreme heat conditions.

Source: Fast Company

Eye-contact has a significant impact on interpersonal evaluation, and online job interviews are no exception

Two diagrams of individuals making eye-contact in video conferencing, with differences in focus on the camera and screen.

Maintaining “eye contact” with someone on a video conference call is a bit weird, because it necessitates looking directly into the camera. It’s important, though, otherwise it feels like the other person isn’t looking at you. And that impacts relationships - and, it seems, interpersonal evaluations.

The results indicate interviewers evaluate candidates more positively when their gaze is directed at the camera (i.e., CAM stimulus) compared to when the candidates look at the screen (SKW stimulus). The skewed-gaze stimulus received worse evaluation scores than voice-only presentation (VO stimulus).

Throughout an online interview, it is challenging to maintain “genuine” eye contact—making direct and meaningful visual connection with another person, but gazing into the camera can accomplish a similar feeling online as direct eye contact does in person.

While the evaluators overall preferred interviewees who maintained eye contact with the camera, an unconscious gender bias appeared. Female evaluators judged those with skewed downward gazes more harshly than male evaluators, and the difference in the evaluation of the CAM and SKW stimuli for female interviewees was larger than the male interviewees.

This gender bias within the study could be prevalent under non-experimental conditions. Making both interviewers and interviewees aware of this potentially systematic gender bias could help curtail this issue.

Source: Phys.org

Here is a book as a toolbox to build actual, hard-tacks answers to the crisis of the Long Emergency

Lifehouse book cover

I’ve been very much looking forward to reading Lifehouse: Taking Care of Ourselves in a World on Fire by Adam Greenfield, so I was delighted to discover today that, despite having a release date of 9th July, I could already download the ePUB!

Adam featured on an episode during the last season of our podcast, The Tao of WAO, and was generous with his time. Go and listen to that to discover what the book’s original title was, and also pre-order the book!

I have a particular suspicion of the kind of book that spends 10 chapters telling the reader the many problems that face the contemporary world, and then follows with a final chapter that offers something - socialism, say - as the simple solution to all our woes. I call this the ‘11 Chapter problem’, and warn every author to avoid this trap. It is often easy to diagnose the problem; it is far harder to think clearly about what we are meant to do about it. More often than not the reader is already well aware of the problems: the reason they pick up a book is to find solutions. And this is why I am so excited about Adam Greenfield’s Lifehouse: Taking Care of Ourselves in a World on Fire. Here is a book as a toolbox to build actual, hard-tacks answers to the crisis of the Long Emergency.

[…]

It starts as a building - a church, a library, a school gym - that is the Lifehouse. This is a place for everyone to go to in an emergency - a flood, fire, or hurricane. It will have a kitchen, beds, clothing storage. But it will also be its own power source - with generators or renewable energy from a wind turbine, or solar panels on the roof. It will also be able to produce its own food with vertical farming technology installed. It will be a tool library that allows for repair and restoration, even outside the emergency - with 3-D printing technology. It will also have a skills library so that the community knows who is a doctor, nurse, teacher or transport.

[…]

But sustaining that effort for the long term - the long emergency itself - is hard. Often communities fail to plan far enough ahead. They split and betray each other. They face insurmountable opposition who wish to take away their autonomy. The Lifehouse is designed with this in mind too. It is not an afterthought, but a deeply considered means in which to think about the future.

Source: Verso Book Club: Lifehouse

A smaller human population will immensely facilitate other transformations we need

The image is a smooth line graph plotting global population projections from 1960 to 2100. The x-axis represents the years, marked in 10-year increments from 1960 to 2100. The y-axis represents population in billions, ranging from 2 to 11. Two distinct curves are shown: a black line representing U.N. projections and a blue line representing an alternate model by Tom Murphy. The black line shows population increasing steadily from around 3 billion in 1960, peaking at about 10.8 billion around the year 2090, and then slightly declining. The blue line shows a similar steady increase until about 2040, peaking at around 8.5 billion before declining sharply to below 4 billion by 2100.

Chart: Population projections from the U.N. (black) and Tom Murphy (blue)

When it’s put as starkly as this, it’s interesting to think about a post-peak human population as something that might happen within my lifetime. The article cited below was linked to from this one by Tom Murphy of UC San Diego, who created the chart I’ve used to accompany this post.

I’m no expert, but Murphy’s reasoning seems sound, and I’d assume that the existing right-wing ‘natalism’ is likely to go mainstream within this decade. Interesting times.

Governments worldwide are in a race to see which one can encourage the most women to have the most babies. Hungary is slashing income tax for women with four or more children. Russia is offering women with 10 or more children a “Mother-Heroine” award. Greece, Italy, and South Korea are bribing women with attractive baby bonuses. China has instituted a three-child policy. Iran has outlawed free contraceptives and vasectomies. Japan has joined forces with the fertility industry to infiltrate schools to promote early childbearing. A leading UK demographer has proposed taxing the childless. Religious myths are preventing African men from getting vasectomies. A eugenics-inspired Natal conference just took place in the U.S., a nation leading the way in taking away reproductive rights.

[…]

The alarmism surrounding declining fertility rates is unfounded; it is a positive trend that represents greater reproductive choice, and one that we should accelerate. A smaller human population will immensely facilitate other transformations we need: mitigating climate change, conserving and rewilding ecosystems, making agriculture sustainable, and making communities more resilient and able to integrate more climate and war refugees.

Source: CounterPunch

The Promise and Pitfalls of Decentralised Social Networks

Threads of assorted colours

This paper, ‘Decentralized Social Networks and the Future of Free Speech Online’, explores the potential of decentralised social networks like Mastodon and Bluesky to enhance free speech by shifting control from central authorities to individual users. The author, Tao Huang, examines how decentralisation can promote the free speech values of knowledge, democracy, and autonomy, while also acknowledging the inherent challenges and trade-offs in practical implementation.

Huang highlights that decentralised networks face significant challenges in knowledge verification, effective moderation, and avoiding recentralisation. He notes that the ideal of decentralisation often conflicts with practical needs, which necessitates some centralised mechanisms for things such as content moderation and cross-community communication. So, Huang argues, to truly empower users, we need inclusive design processes and ongoing policy discussions.

The decentralized social network has been widely viewed as a cure to its centralized counterpart, which is owned by corporate monopolies, funded by surveillance capitalism, and moderated according to rules made by the few (Gehl 2018, 2-3). The tremendous and unchecked power of those giant platforms was seen as a major threat to people’s rights and freedoms online. The newly emergent decentralized social networks, through infrastructural redesign, create a power-sharing scheme with the end users, so that it is the users themselves, rather than a corporate body, that determine how the communities shall be governed. Such an approach has been hailed as a promising way of curbing the monopolies and empowering the users. It was expected to bring more freedom of speech to individuals, and the vision it underscores – openness rather than walled-gardens, bottom-up rather than top-down – represents the future of the Internet (Ricknell 2020, 115).

[…]

The discussion of the decentralization project is trending, but it is too limited because there lacks systematic and critical review on the project’s normative implications. In particular, the current debate is mostly restricted to the technical circle, without sufficient input and participation from other fields such as policy, law and ethics. So far, researchers on the decentralized social networks mainly focus on their technical difficulties and features, rather than its social implications (Marx & Cheong 2023, 2). Lawmakers and regulators in the world have paid little attention yet to regulating this new technical paradigm (Friedl & Morgan 2024, 8). For decentralized networks to serve as the desirable future of online communications, we need to know why this is so and how it can be achieved. Will decentralized networks better facilitate the free speech online than the centralized platforms? How to design the new space to make it really fit with our value commitments? All the utopian and dystopian analyses of the decentralized future are only possibilities: what matters is the choices we make about how these technologies are designed and used (Cohnh & Mir 2022). Value commitments must be carefully examined and considered in the design process.

Source: arXiv

Image: Omar Flores

If we don’t change course, most people in the U.S. will have some flavor of Long COVID of one sort or another

Illustration of a person with dark brown hair wearing a red face mask, surrounded by stylized coronavirus icons and red lightning bolts.

For the past few years, I’ve been on the list of ‘vulnerable’ people who get a free booster Covid vaccine due to my asthma. That’s no longer the case, but Covid is still around, and mutating.

This interview with someone who, admittedly, runs a Covid testing company, has made me think that perhaps I need to pay for a private vaccine because I really don’t want Long Covid. Even the venerable Venkatesh Rao has written about the cognitive impact that he suspects Covid has had on him.

Dr. Phillip Alvelda, a former program manager in DARPA’s Biological Technologies Office that pioneered the synthetic biology industry and the development of mRNA vaccine technology, is the founder of Medio Labs, a COVID diagnostic testing company. He has stepped forward as a strong critic of government COVID management, accusing health agencies of inadequacy and even deception. Alvelda is pushing for accountability and immediate action to tackle Long COVID and fend off future pandemics with stronger public health strategies.

[…]

PA: There are all kinds of weird things going on that could be related to COVID’s cognitive effects. I’ll give you an example. We’ve noticed since the start of the pandemic that accidents are increasing. A report published by TRIP, a transportation research nonprofit, found that traffic fatalities in California increased by 22% from 2019 to 2022. They also found the likelihood of being killed in a traffic crash increased by 28% over that period. Other data, like studies from the National Highway Traffic Safety Administration, came to similar conclusions, reporting that traffic fatalities hit a 16-year high across the country in 2021. The TRIP report also looked at traffic fatalities on a national level and found that traffic fatalities increased by 19%.

[…]

Damage from COVID could be affecting people who are flying our planes, too. We’ve had pilots that had to quit because they couldn’t control the airplanes anymore. We know that medical events among U.S. military pilots were shown to have risen over 1,700% from 2019 to 2022, which the Pentagon attributes to the virus.

[…]

PA: What does this look like if we continue on the way we are doing right now? What is the worst-case scenario? Well, I think there are two important eventualities. So we’re what, four years in? Most people have had COVID three and a half times on average already. After another four years of the same pattern, if we don’t change course, most people in the U.S. will have some flavor of Long COVID of one sort or another.

Source: Institute for New Economic Thinking

An inferior, or at least grossly limited version of intelligence

An old red bulb hanging amongst the rafters at Copenhagen Street Food Market.

Audrey Watters cites the work of Zoë Schlanger, who asks what we mean by ‘intelligence’. This is important, of course, because we often prepend the word ‘artificial’ to it, foregrounding something and backgrounding something else. It’s a zeugma.

Really, it’s intelligence I’ve been thinking about lately, as I’ve been reading Zoë Schlanger’s new book The Light Eaters: How the Unseen World of Plant Intelligence Offers a New Understanding of Life on Earth.

[…]

What do we mean, Schlanger asks, by “intelligence”? What behaviors indicate that an organism, plant or otherwise, is “thinking”? How does one “think” without a nervous system, without a brain? Some of the reasons why we’ve answered these questions in such a way to deny plant intelligence can be traced, of course, to ancient Greece and to the Aristotelian insistence that thinking is the purview solely of Man. It’s a particular kind of thinking too that is privileged in this definition: rationality.

And it’s that form of “thinking,” of “intelligence” that is privileged in the discussions about artificial intelligence. It’s actually an inferior, or at least grossly limited version of intelligence, if it’s intelligence at all — the idea that entities, animal or machine, are programmed, coded. It’s so incredibly limiting.

Source: Second Breakfast

Image: Shane Rounce

F L A M I N G O N E

Flamingo on a sandy beach with an award text overlay

Miles Astray is a photographer who recently won the People’s Vote and a Jury Award in the artificial intelligence category of the 1839 Awards. The photo is of a flamingo whose head is apparently missing.

The twist: the photo is as real as the simple belly scratch the bird is busy with.

With AI-generated content remodelling the digital landscape rapidly while sparking an ever-fiercer debate about its implications for the future of content and the creators behind it – from creatives like artists, journalists, and graphic designers to employees in all sorts of industries – I entered this actual photo into the AI category of 1839 Awards to prove that human-made content has not lost its relevance, that Mother Nature and her human interpreters can still beat the machine, and that creativity and emotion are more than just a string of digits.

After seeing recent instances of AI-generated imagery outshining actual photos in competitions, it occurred to me that I could twist this story inside down and upside out the way only a human could and would, by submitting a real photo into an AI competition. My work F L A M I N G O N E was the perfect candidate because it’s a surreal and almost unimaginable shot, and yet completely natural. It is the first real photo to win an AI award.

Source: Miles Astray

It's all just one big ocean

A watercolour image showing how all of the world's oceans are connected

Source: Fix The News

Credit: Natalie Renier/Woods Hole Oceanographic Institution

The writer’s equivalent of what in computer architecture is called speculative execution

'World as it actually is' with multiple lines and blobs. Slightly chaotic.

As ever with Venkatesh Rao’s posts, there’s a lot going on with this one. Ostensibly, it’s about the third anniversary of the most recent iteration of his newsletter, but along the way he discusses the current state of the world. It’s a post worth reading for the latter reason, but I’m focused here on the role of writing and publishing online, which I do quite a lot.

Rao tries to fit his posts into one of five narrative scaffoldings, which cover different time spans. Everything else falls by the wayside. I do the opposite: I just publish everything so I’ve got a URL for all of my thoughts, and can weave them together on demand later. I think Cory Doctorow is a bit like that, too (although he’s more organised and a much better writer than me!)

As a writer, you cannot both react to the world and participate in writing it into existence — the bit role in “inventing the future” writers get to play — at the same time. My own approach to resolving this tension has been to use narratives at multiple time scales as scaffolding for sense-making. Events that conform (but not necessarily confirm) to one or more of my narratives leave me with room to develop my ab initio creative projects. Events that either do not conform to my narratives or simply fall outside of them entirely tend to derail or drain my creative momentum. It is the writer’s equivalent of what in computer architecture is called speculative execution. If you’re right enough, often enough, as a writer, you can have your cake and eat it too — react to the world, and say what you want to say at the same time.

[…]

Writing seemed like a more culturally significant, personally satisfying, aesthetically appropriate, and existentially penetrating thing to be doing in 2014 than it does now in 2024. I think we live in times when writing has less of a role to play in inventing the future, for a variety of reasons. You have to work harder at it, for less reward, in a smaller role. Fortunately for my sanity, writing is not the only thing I do with my life.

[…]

Maybe we’re just at the end of a long arc of 25 years or so, when writing online was exceptionally culturally significant and happened to line up with my most productive writing years, and the other shoe has dropped on the story of “blogging.”

Source: Ribbonfarm Studio

It's impossible to 'hang out' on the internet, because it is not a place

Two young people sitting on the ground with their backs to a car, sharing an earbud each

I spend a lot of time online, but do I ‘hang out’ there? I certainly hang out with people playing video games, but that’s online rather than on the internet. Drew Austin argues that because of the amount of money and algorithms on the internet, it’s impossible to hang out there.

I’m not sure. It depends on your definition of ‘hanging out’, and it also depends whether you’re just focusing on mainstream services, or whether you’re including the Fediverse and niche things such as School of the Possible. The latter, held every Friday by Dave Gray, absolutely is ‘hanging out’, but whether Zoom calls with breakout rooms count as the internet depends on semantics, I guess.

Is “hanging out” on the internet truly possible? I will argue: no it’s not. We’re bombarded with constant thinkpieces about various social crises—young people are sad and lonely; culture is empty or flat or simply too fragmented to incubate any shared meaning; algorithms determine too much of what we see. Some of these essays even note our failure to hang out. The internet is almost always an implicit or explicit villain in such writing but it’s increasingly tedious to keep blaming it for our cultural woes.

Perhaps we could frame the problem differently: The internet doesn’t have to demand our presence the way it currently does. It shouldn’t be something we have to look at all the time. If it wasn’t, maybe we’d finally be free to hang out.

[…]

How many hours have been stolen from us? With TV, we at least understood ourselves to be passive observers of the screen, but the interactive nature of the internet fostered the illusion that message boards, Discord servers, and Twitter feeds are digital “places” where we can in fact hang out. If nothing else, this is a trick that gets us to stick around longer. A better analogy for online interaction, however, is sitting down to write a letter to a friend—something no one ever mistook for face-to-face interaction—with the letters going back and forth so rapidly that they start to resemble a real-time conversation, like a pixelated image. Despite all the spatial metaphors in which its interfaces have been dressed up, the internet is not a place.

Source: Kneeling Bus

Image: Wesley Tingey

'Wet streets cause rain' stories

Digital artwork of a brain surrounded by a network of interconnected nodes and icons, including social media and technology symbols.

First things first: the George Orwell quotation below is spurious, as the author of this article, David Cain, points out at the end of it. The point is that it sounds plausible, so we take it on trust. It confirms our worldview.

We live in a web of belief, as W.V. Quine put it, meaning that we easily accept things that confirm our core beliefs. And then, with beliefs that are more peripheral, we pick them up and put them down at no great cost. Finding out that the capital of Burkina Faso is Ouagadougou and not Bobo-Dioulasso makes no practical difference to my life. It would make a huge difference to the residents of either city, however.

I don’t like misinformation, and I think we’re in quite a dangerous time in terms of how it might affect democratic elections. However, it has always been so. Gossip, rumour, and straight up lies have swayed human history. The thing is that, just as we are able to refute poor journalism and false statements on social networks about issues we know a lot about, so we need to be a bit skeptical about things outside of our immediate knowledge.

After all, as Cain quotes Michael Crichton as saying, there are plenty of ‘wet streets cause rain’ stories out there, getting causality exactly backwards — intentionally or otherwise.

Consider the possibility that most of the information being passed around, on whatever topic, is bad information, even where there’s no intentional deception. As George Orwell said, “The most fundamental mistake of man is that he thinks he knows what’s going on. Nobody knows what’s going on.”

Technology may have made this state of affairs inevitable. Today, the vast majority of a person’s worldview is assembled from second-hand sources, not from their own experience. Second-hand knowledge, from “reliable” sources or not, usually functions as hearsay – if it seems true, it is immediately incorporated into one’s worldview, usually without any attempt to substantiate it. Most of what you “know” is just something you heard somewhere.

[…]

It makes perfect sense, if you think about it, that reporting is so reliably unreliable. Why do we expect reporters to learn about a suddenly newsworthy situation, gather information about it under deadline, then confidently explain the subject to the rest of the nation after having known about it for all of a week? People form their entire worldviews out of this stuff.

[…]

People do know things though. We have airplanes and phones and spaceships. Clearly somebody knows something. Human beings can be reliable sources of knowledge, but only about small slivers of the whole of what’s going on. They know things because they deal with their sliver every day, and they’re personally invested in how well they know their sliver, which gives them constant feedback on the quality of their beliefs.

Source: Raptitude

Dividers tell the story of how they’ve renovated their houses, becoming architects along the way. Continuers tell the story of an august property that will remain itself regardless of what gets built.

Four-panel illustration showing different life stages on a tree: a child sitting, a teenager climbing, an adult leaning, and an older person sitting.

This long article in The New Yorker is based around the author wondering whether the fun he’s had playing with his four year-old will be remembered by his son when he grows up.

Wondering whether you are the same person at the start and end of your life was a central theme of a ‘Mind, Brain, and Personal Identity’ course I did as part of my Philosophy degree around 22 years ago. I still think about it. On the one hand is the Ship of Theseus argument, where you can one-by-one replace all of the planks of a ship, but it’s still the same ship. If you believe it’s the same ship, and believe that you’re the same person as when you were younger, then the author of this article would call you a ‘Continuer’.

On the other hand, you might think that there are important differences between the person you are now and the person you were when you were younger. If, for example, the general can’t remember ‘going over the top’ as a young man, despite still having the medal to prove it, is he the same person? If you don’t think so, then perhaps you are a ‘Divider’.

I don’t consider it so clean cut. We tell stories about ourselves and others, and these shape how we think. For example, going to therapy five years ago helped me ‘remove the mask’ and reconsider who I am. That involved reframing some of the experiences in my life and realising that I am this kind of person rather than that kind of person.

It’s absolutely fine to have seasons in your life. In fact, I’m pretty sure there’s some ancient wisdom to that effect?

Are we the same people at four that we will be at twenty-four, forty-four, or seventy-four? Or will we change substantially through time? Is the fix already in, or will our stories have surprising twists and turns? Some people feel that they’ve altered profoundly through the years, and to them the past seems like a foreign country, characterized by peculiar customs, values, and tastes. (Those boyfriends! That music! Those outfits!) But others have a strong sense of connection with their younger selves, and for them the past remains a home. My mother-in-law, who lives not far from her parents’ house in the same town where she grew up, insists that she is the same as she’s always been, and recalls with fresh indignation her sixth birthday, when she was promised a pony but didn’t get one. Her brother holds the opposite view: he looks back on several distinct epochs in his life, each with its own set of attitudes, circumstances, and friends. “I’ve walked through many doorways,” he’s told me. I feel this way, too, although most people who know me well say that I’ve been the same person forever.

[…]

The philosopher Galen Strawson believes that some people are simply more “episodic” than others; they’re fine living day to day, without regard to the broader plot arc. “I’m somewhere down towards the episodic end of this spectrum,” Strawson writes in an essay called “The Sense of the Self.” “I have no sense of my life as a narrative with form, and little interest in my own past.”

[…]

John Stuart Mill once wrote that a young person is like “a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing.” The image suggests a generalized spreading out and reaching up, which is bound to be affected by soil and climate, and might be aided by a little judicious pruning here and there.

Source: The New Yorker

Can’t access it? Try Pocket or Archive Buttons

The iPhone effect, if it was ever real in the first place, is certainly not real now.

Apple logo over an Apple store

It’s announcement time at Apple’s WWDC. And apart from trying to rebrand AI as “Apple Intelligence” I haven’t seen many people get very excited about it. MKBHD has an overview if you want to get into the details. I just use macOS without iCloud because everything works and my Mac Studio is super-fast.

Ryan Broderick has a word for Apple fanboys, who seem to think that everything Apple touches is gold. It seems like the Vision Pro hasn’t brought VR mainstream, and after the more innovative Steve Jobs era, the company seems happier to be a luxury brand that plays it relatively safe.

If you press Apple fanboys about their weird revisionist history, they usually pivot to the argument that while iOS’s marketshare has essentially remained flat for a decade, their competitors copy what they do and that trickles down into popular culture from there. Which I’m not even sure is true either. Android had mobile payments three years before Apple, had a smartwatch a year before, a smart speaker a year before, and launched a tablet around the same time as the iPad. We could go on and on here.

And, I should say, I don’t actually think Apple sees themselves as the great innovator their Gen X blogger diehards do. In the 2010s, they shifted comfortably from a visionary tastemaker, at least aesthetically, into something closer to an airport lounge or a country club for consumer technology. They’ll eventually have a version of the new thing you’ve heard about, once they can rebrand it as something uniquely theirs. It’s not VR, it’s “spatial computing,” it’s not AI, it’s “Apple Intelligence”. But they’re not going to shake the boat. They make efficiently-bundled software that’s easy to use (excluding iPadOS) and works well across their nice-looking and easy-to-use devices (excluding the iPad). Which is why Apple Intelligence is not going to be the revolution the AI industry has been hoping for. The same way the Vision Pro wasn’t. The iPhone effect, if it was ever real in the first place, is certainly not real now.

Source: Garbage Day

The latest Hardcore History just dropped

A digitally composed image featuring a woman holding a white owl, dressed in an ancient gold-toned attire, beside a goblet, with mystical forest elements and a lightning strike in the background, evoking a mythical atmosphere. The text 'Dan Carlin's Mania for Subjugation' floats above in distressed red lettering.

I could listen to Dan Carlin read the phone book all day, so to read the announcement that his latest multi-part (and multi-hour!) series for the Hardcore History podcast has started is great news!

So, after almost two decades of teasing it, we finally begin the Alexander the Great saga.

I have no idea how many parts it will turn out to be, but we are calling the series “Mania for Subjugation” and you can get the first installment HERE. (of course you can also auto-download it through your regular podcast app).

[…]

And what a story it is! My go-to example in any discussion about how truth is better than fiction. It is such a good tale and so mind blowing that more than 2,300 years after it happened our 21st century people still eagerly consume books, movies, television shows and podcasts about it. Alexander is one of the great apex predators of history, and he has become a metaphor for all sorts of Aesop fables-like morals-to-the-story about how power can corrupt and how too much ambition can be a poison.

Source: Look Behind You!

The logical conclusion of rich, isolated computer programmers having ketamine orgies with each other

Geometric crystal with a sparkling eruption of pink particles on a soft pink background, resembling a stylized, fantastical display.

Ryan Broderick with a reality check about OpenAI and GenAI in general:

I think this [Effective Altruists vs effective accelerationists debate] is all very silly. I also think this is the logical conclusion of rich, isolated computer programmers having ketamine orgies with each other. But it does, unfortunately, underpin every debate you’re probably seeing about the future of AI. Silicon Valley’s elite believe in these ideas so devoutly that Google is comfortable sacrificing its own business in pursuit of them. Even though EA and e/acc are effectively just competing cargo cults for a fancy autocorrect. Though, they also help alleviate some of the intense pressure huge tech companies are under to stay afloat in the AI arms race. Here’s how it works.

[…]

Analysts told The Information last year that OpenAI’s ChatGPT is possibly costing the company up to $700,000 a day to operate. Sure, Microsoft invested $13 billion in the company and, as of February, OpenAI was reportedly projecting $2 billion in revenue, but it’s not just about maintaining what you’ve built. The weird nerds I mentioned above have all decided that the finish line here is “artificial general intelligence,” or AGI, a sentient AI model. Which is actually very funny because now every major tech company has to burn all of their money — and their reputations — indefinitely, as they compete to build something that is, in my opinion, likely impossible (don’t @ me). This has largely manifested as a monthly drum beat of new AI products no one wants rolling out with increased desperation. But you know what’s cheaper than churning out new models? “Scaring” investors.

[…]

This is why OpenAI lets CEO Sam Altman walk out on stages every few weeks and tell everyone that its product will soon destroy the economy forever. Because every manager and executive in America hears that and thinks, “well, everyone will lose their jobs but me,” and continues paying for their ChatGPT subscription. As my friend Katie Notopoulos wrote in Business Insider last week, it’s likely this is the majority of what Altman’s role is at OpenAI. Doomer in chief.

[…]

I’ve written this before, but I’m going to keep repeating it until the god computer sends me to cyber hell: The “two” “sides” of the AI “debate” are not real. They both result in the same outcome — an entire world run by automations owned by the ultra-wealthy. Which is why the most important question right now is not, “how safe is this AI model?” It’s, “do we even need it?”

Source: Garbage Day

Image: Google DeepMind

In the English language, a human alone has distinction while all other living beings are lumped with the nonliving “its.”

Two jaguars lounging on a moss-covered tree branch in a misty tropical forest, surrounded by dense vegetation.

I posted on social media recently that I want more verbs and fewer nouns in my life. This article, via Dense Discovery, backs this sentiment up, with reference to the indigenous heritage of the author of Braiding Sweetgrass.

Grammar, especially our use of pronouns, is the way we chart relationships in language and, as it happens, how we relate to each other and to the natural world.

[…]

We have a special grammar for personhood. We would never say of our late neighbor, “It is buried in Oakwood Cemetery.” Such language would be deeply disrespectful and would rob him of his humanity. We use instead a special grammar for humans: we distinguish them with the use of he or she, a grammar of personhood for both living and dead Homo sapiens. Yet we say of the oriole warbling comfort to mourners from the treetops or the oak tree herself beneath whom we stand, “It lives in Oakwood Cemetery.” In the English language, a human alone has distinction while all other living beings are lumped with the nonliving “its.”

There are words for states of being that have no equivalent in English. The language that my grandfather was forbidden to speak is composed primarily of verbs, ways to describe the vital beingness of the world. Both nouns and verbs come in two forms, the animate and the inanimate. You hear a blue jay with a different verb than you hear an airplane, distinguishing that which possesses the quality of life from that which is merely an object.

[…]

Linguistic imperialism has always been a tool of colonization, meant to obliterate history and the visibility of the people who were displaced along with their languages… Because we speak and live with this language every day, our minds have also been colonized by this notion that the nonhuman living world and the world of inanimate objects have equal status. Bulldozers, buttons, berries, and butterflies are all referred to as it, as things, whether they are inanimate industrial products or living beings.

Source: Orion Magazine

Oblivion doesn’t just mean eradication: it is erasure

The CASPER super-computer from Neon Genesis Evangelion (1996).

If you haven’t come across The New Design Congress before, I highly suggest reading their essays and research notes, and subscribing to their newsletter. The following is an excerpt from their most recent issue:

It is not only a gluttony for energy that animates Big and Small Tech, but also social legitimacy. Here, oblivion doesn’t just mean eradication: it is erasure. This manifests in the social burden of the so-called ‘unintended consequences’ of technology. There is much concern to hold regarding the deployment of digitised forms of identification, including so-called decentralised and self-sovereign ones. Feasible only at immense scale, their proposed reliance on power-hungry blockchains so susceptible to scams, frauds and wastefulness is but one issue. Digital identities sketch schizophrenic futures made of radical self-custody combined with naive market-based ecosystems of private identity managers. This assetisation is backed by a trust mechanism bound to become the mother of all social engineering attack vectors, relying as it does on idealist claims of identity. If trustworthiness within a digital identity system can be defined as that which is necessary to permit access, it can also be defined as that which necessarily breaks security policies. In the US and UK, voter ID is already an efficient weapon for reactionary power structures to fight off democratic participation, particularly of minorities. No actor in the field has seriously reckoned with such socio-technical weaponisation of their tech stack.

As we etched in the previous Cable, another world is possible. One where new modes of self- and interpersonal recognition are developed from a posture of conciliation, rather than a fragile and vampiric extraction of socially-shared goods. The challenge now is sifting through the gold rush, to find systems that are capable of fulfilling this promise.

Source: CABLE 2024/03-05

65% of UK adults aged 18-35 support “a strong leader who doesn’t have to bother with parliamentary elections”

A collage of Conservative politicians by Cold War Steve

I wouldn’t usually link to UnHerd, but the figure quoted here is taken from a tweet by Rory Stewart, who I do trust. I’m hugely concerned about creeping authoritarianism, and so, while I’m dead against the Tories, I can’t see how an even further-right party in the guise of Reform UK is something to be celebrated.

While I get the desire to have someone to sort things out, the way we do so is together using systems. Not by electing a tough-talking figurehead who dispenses with elections.

It is no wonder that, after a generation of Conservative Party rule, 46% of British adults now support “a strong leader who doesn’t have to bother with parliamentary elections”, a figure which rises to 65% of those aged 18-35. By Gove’s definition, the majority of the electorate will soon be composed of extremists: this widening gulf between the governing and the governed is not a recipe for political stability.

Source: UnHerd

Image: Cold War Steve

Podcasts worth listening to

Black headphones on a yellow background

TIME has a list of the ‘best podcasts of 2024 so far’. 99% Invisible is great, but my favourite podcasts are nowhere to be seen here, not to mention my own with Laura, The Tao of WAO! I love podcasts, and listen to them while running, in the gym, washing dishes, in the car, mowing the lawn… wherever.

You can download an OPML file (?) of all of the shows I subscribe to via my Open Source app of choice, AntennaPod. My favourites at the moment though, in alphabetical order, are:

  • Dan Carlin’s Hardcore History
  • No Such Thing As A Fish
  • The Art of Manliness
  • The Rest is Politics
  • You Are Not So Smart
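For the curious, an OPML file is just a small XML document listing feed URLs, which podcast apps such as AntennaPod can import and export to move subscriptions between services. A minimal sketch (the titles are real shows from the list above, but the feed URLs here are hypothetical placeholders) looks something like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head>
    <title>Podcast subscriptions</title>
  </head>
  <body>
    <!-- One outline element per feed; xmlUrl points at the show's RSS feed -->
    <outline type="rss" text="Dan Carlin’s Hardcore History" xmlUrl="https://example.com/hardcore-history.rss"/>
    <outline type="rss" text="No Such Thing As A Fish" xmlUrl="https://example.com/nstaaf.rss"/>
  </body>
</opml>
```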

It’s getting increasingly difficult to discover new treasures in the cacophonous world of podcasting. There are a lot of shows—many of them not good. It seems every week a new celebrity announces a podcast in which they ask other celebrities out-of-touch questions or revisit their own network sitcom heyday. And studios continue to scrounge for the most morally dubious true-crime topics they can find.

Source: TIME

Image: C D-X

The theory of 'a rising tide lifts all boats' does not work when you allow the people with the most influence to buy their way out of the water

Blue water

I agree with this so much. I’ve had jobs where I’ve been entitled to private health insurance, which I’ve turned down because I want to have a stake in the success and continued existence of the National Health Service. I’d love a world where I don’t have to have a car because public transport is ubiquitous. I would never send my kids to private school, and am delighted that Labour have announced that, if they get into power, they’ll raise money for public schools from taxing private schools like the businesses they are.

One of the most direct ways to improve a flawed system is simply to end the ability of rich and powerful people to exclude themselves from it. If, for example, you outlawed private schools, the public schools would get better. They would get better not because every child deserves to have a quality education, but rather because it would be the only way for rich and powerful people to ensure that their children were going to good schools. The theory of “a rising tide lifts all boats” does not work when you allow the people with the most influence to buy their way out of the water. It would be nice if we fixed broken systems simply because they are broken. In practice, governments are generally happy to ignore broken things if they do not affect people with enough power to make the government listen. So the more people that we push into public systems, the better.

Rich kids should go to public schools. The mayor should ride the subway to work. When wealthy people get sick, they should be sent to public hospitals. Business executives should have to stand in the same airport security lines as everyone else. The very fact that people want to buy their way out of all of these experiences points to the reason why they shouldn’t be able to. Private schools and private limos and private doctors and private security are all pressure release valves that eliminate the friction that would cause powerful people to call for all of these bad things to get better. The degree to which we allow the rich to insulate themselves from the unpleasant reality that others are forced to experience is directly related to how long that reality is allowed to stay unpleasant. When they are left with no other option, rich people will force improvement in public systems. Their public spirit will be infinitely less urgent when they are contemplating these things from afar than when they are sitting in a hot ER waiting room for six hours themselves.

Source: How Things Work

Image: Daniel Sinoca

TikTok as spectacle

A black and white illustration of a disinterested woman walking through a store with shelves, and a satirical comment about boredom near the top-right.

Audrey Watters links to this post by Rob Horning which talks about sports, social media, AI, and Guy Debord. So pretty much catnip for me.

I’m just going to share the part about TikTok and Debord’s ‘spectacle’. It’s worth reading the rest of it for how Horning then goes on to apply this to LLMs such as GPT-4o and the semblance of doing rather than simply watching and consuming.

The way TikTok conflates experience with voyeurism makes it a somewhat clear demonstration of Guy Debord’s “society of the spectacle.” Debord argues that under the conditions of late 20th century capitalism — conditions of media centricity and monopoly that have only intensified into our century — spectacle and lived experience are in a complex dialectic that sustains a generalized alienation and a universal reification. “It is not just that the relationship to commodities is now plain to see, commodities are now all that there is to see; the world we see is the world of the commodity.” Debord concludes that individuals are “condemned to the passive acceptance of an alien everyday reality” and are driven to “resorting to magical devices” to “entertain the illusion” of “reacting to this fate.” TikTok could be considered as one of those magical devices (along with the phone in its entirety) that manages that dialectic. Under the guise of “entertainment,” passivity reappears to the entertained individual as a kind of perfected agency; alienation is redeemed as the requisite precursor to consumer delectation.

“The spectacle is essentially tautological,” Debord writes, “for the simple reason that its means and ends are identical. It is the sun that never sets on the empire of modern passivity. It covers the entire globe, basking in the perpetual warmth of its own glory.”

Source: Internal exile

Absurd design

A pen that is also like an arm and hand

We were using the CC0-licensed Humaaans for some work this week when the client decided they didn’t particularly like them. When searching for alternatives, I stumbled across Absurd Designs, which the client didn’t like any better, but which I’ve used for a couple of posts on my personal blog.

What about absurd illustrations for your projects? Take every user on an individual journey through their own imagination.

Source: Absurd Designs

The AI Egg

Four nested ovals in descending size labeled with types of organizational changes: 'Context change,' 'Changes to mission,' 'New processes and ways of working,' and 'Efficiencies with existing workflows,' in orange to purple to teal colors.

Dan Sutch, who I’ve known at this point in various roles for around 17 years, introduces the ‘AI Egg’ to make sense of the different perspectives and discussion contexts for Generative AI.

I think it bears more than a passing resemblance to the SAMR model, which focuses on educational technology transformation. I talked about that last year in relation to AI. What can I say, it’s a curse being ahead of the curve.

We’ve held thousands of conversations with charities, trusts and foundations, digital agencies and community groups discussing the opportunities and challenges of Generative AI (GenAI). One thing we’ve learnt is that the scale and speed of the changes means there are thousands more conversations to have (and much more action too). The reason is that there are many discussions, debates (and again, action!) to be had at multiple levels, because of the scale of the implications of GenAI.

Source: CAST Writers

What is systems thinking?

Person looking at a slide entitled 'What is a system?'

I only found out about this online event featuring Gerald Midgley shortly before it occurred. I couldn’t make it, but I’m glad there’s a recording so that I can watch it at some point as part of my learning around systems thinking.

In this post, Andrew Curry, whose newsletter is well worth subscribing to, summarises the main points that Midgley made on his blog.

I’m a member of the Agri-Foods for Net Zero network, and it runs a good series of knowledge sharing events. (I’ve written about AFNZ here before). Last month it invited one of Britain’s leading systems academics, Gerald Midgley, to do an introductory talk on using systems thinking to explore complex problems.

[…] The questions he addressed were:

  • What are highly complex problems?
  • What is systems thinking?
  • Different systems approaches for different purposes

Source: The Next Wave

Alone time

Person by themselves

I haven’t spent enough time alone recently. I need to get back out into the mountains with my tent.

Neuroscientists have discovered that, regardless of your clinical label, those of us who prefer solitude have something in common. We tend to have low levels of oxytocin in our brains, and higher levels of vasopressin. That’s the recipe for introverts and recluses, even hermits. Michael Finkel talks about this brain profile in his book The Stranger in The Woods, about a hermit named Christopher Knight. He lived in the backwoods of Maine for nearly three decades, living off goods pillaged from cabins and vacation homes. He terrified residents, but nobody could ever find him.

When police finally found Knight, they were shocked. The guy was in nearly perfect mental and physical health. Locals didn’t believe his story. They expected the Unabomber. Instead, Knight turned out to be a pleasant guy who loved reading. He was easy to get along with. He had no grudge against society. Therapists got exhausted trying to diagnose him and gave up. “I diagnose him as a hermit,” they said.

[…]

Society doesn’t leave hermits alone. They’re doing everything they can to force social interaction on everyone. They insist it’s good for you, ignoring the evidence that solitude can benefit people, lowering their blood pressure and even encouraging brain cell growth. It just so happens that social activity drives this twisted economy.

It makes sense why you want to be alone.

Source: OK Doomer

The effort required to maintain internally consistent and intellectually honest positions in the current environment is daunting

Overhead view of a busy indoor area with blurred figures walking, capturing the motion and activity of a crowded public space.

Albert Wenger, the only Venture Capitalist I pay any attention to, writes on his blog that… he misses writing on his blog. He talks about a couple of reasons for this, the first of which is the usual excuse of “being too busy”.

It’s the second reason that interests me, though, especially as I feel the same futility:

[T]he world is continuing to descend back into tribalism. And it has been exhausting trying to maintain a high rung approach to topics amid an onslaught of low rung bullshit. Whether it is Israel-Gaza, the Climate Crisis or Artificial Intelligence, the online dialog is dominated by the loudest voices. Words have been rendered devoid of meaning and reduced to pledges of allegiance to a tribe. I start reading what people are saying and often wind up feeling isolated and exhausted. I don’t belong to any of the tribes nor would I want to. But the effort required to maintain internally consistent and intellectually honest positions in such an environment is daunting. And it often seems futile.

Source: Continuations

Image: Timon Studler

3 strategies to counter the unseen costs of boundary work within organisations

A metal slinky toy forms an arch between two white, matte surfaces under soft gradient lighting.

This article focuses on research revealing that people who do ‘boundary work’ within organisations (that is to say, individuals who span different silos) are more likely to suffer burnout and exhibit negative social behaviours.

The researchers used “field data, surveys, and experiments involving more than 2,000 working adults across two countries” and found that there are three ways that organisations can reap the benefits of this boundary work while mitigating the downsides:

  1. Strategically integrate cross-silo collaboration into formal roles (i.e. acknowledge their role as “cross-team, cross-function collaborator[s]")
  2. Provide adequate resources (e.g. training programmes and tools for collaboration, but also reward and recognition)
  3. Develop multifaceted check-in mechanisms and provide opportunities to disengage (i.e. gain feedback in multiple ways to gauge when boundary spanners need additional space and/or support)

While past research has documented many benefits of boundary-spanning, we suspected that individuals collaborating across silos may be faced with higher levels of cognitive and emotional demands, which could lead to higher levels of burnout. We also wanted to understand if the exhaustion and burnout they faced may lead to abusive behavior toward others.

[…]

Cross-silo collaboration is a double-edged sword in the modern workplace. While it undeniably serves as a catalyst for expedited coordination and innovation, it can adversely affect the well-being of those who engage in it. The good news is that organizations can adopt a multifaceted approach to support their boundary-spanning employees.

Source: Harvard Business Review

(non-paywalled version)

Just because we cannot imagine a future does not mean it cannot happen

A diagram illustrating the various stages of future forecasting, with a timeline extending from the present into a widening cone divided into layers labeled as 'Projected', 'Probable', 'Plausible', and 'Preposterous', indicating different likelihoods of future events based on current knowledge and trends, adapted from Voros (2003).

I came across this post yesterday, primarily because I was interested in the graphic. The author didn’t specifically acknowledge the source, but I found Joseph Voros' blog, in which he explains how he came up with what he calls The Futures Cone:

The above descriptions are best considered not as rigidly-separate categories, but rather as nested sets or nested classes of futures, with the progression down through the list moving from the broadest towards more narrow classes, ultimately to a class of one — the ‘projected’. Thus, every future is a potential future, including those we cannot even imagine — these latter are outside the cone, in the ‘dark’ area, as it were. The cone metaphor can be likened to a spotlight or car headlight: bright in the centre and diffusing to darkness at the edge — a nice visual metaphor of the extent of our futures ‘vision’, so to speak. There is a key lesson to the listener when using this metaphor—just because we cannot imagine a future does not mean it cannot happen…

Source: The Voroscope

Man or bear IRL

A camper adjusts their gear on a touring bicycle next to a tent with the shadow of another bike on it in an open field at sunset, with distant mountains and a cloudy sky in the background.

This article by Laura Killingbeck is definitely worth reading in its entirety. Not only is it extremely well-written, it grounds a hypothetical internet discussion in real-world experience. Killingbeck is a long-term ‘bikepacker’ and therefore the “man or bear” question is one she grapples with on a regular basis.

The central reason why fewer women travel alone is our fear of male violence and sexual assault. Actually, the most common question I get about my travels is some version of, “Aren’t you afraid to bike/hike/travel alone as a woman?” By naming my gender, the implication is clear. What people really mean is, “Aren’t you afraid of men?”

This leads us straight back to the original conversation about “Man or Bear,” which has nothing to do with bears. (Sorry, bears!) “Would you rather be stuck in a forest with a man or a bear?” is just another way of asking, “Are you afraid of men?” It’s the same question I’ve been fielding for the entirety of my life as a solo female traveler. It’s the same question that hovers over women all the time as we move through the world.

And it’s a question that’s always been difficult for me to answer. I’m not afraid of all men. But I am afraid of some men. The real problem is the gray area in between and what it takes to manage the murkiness of that unknown.

Source: Bikepacking

AI is infecting everything

Google search result for 'can i use gasoline in cooking spaghetti' answering that no you can't but you can use it in a spaghetti recipe (and then making up a recipe)

Imagine how absolutely terrified of competition Google must be to put the output from the current crop of LLMs front-and-centre in their search engine, which dominates the market.

This post collates some of the examples of the ‘hallucinations’ that have been produced. Thankfully, AFAIK it’s only available in North America at the moment. It’s all fun and games, I guess, until someone seeking medical advice dies.

Also, this is how misinformation is likely to get even worse - not necessarily by people being fed conspiracy theories through social media, but by being encouraged to use an LLM-powered search engine trained on them.

Google tested out AI overviews for months before releasing them nationwide last week, but clearly, that wasn’t enough time. The AI is hallucinating answers to several user queries, creating a less-than-trustworthy experience across Google’s flagship product. In the last week, Gizmodo received AI overviews from Google that reference glue-topped pizza and suggest Barack Obama was Muslim.

The hallucinations are concerning, but not entirely surprising. Like we’ve seen before with AI chatbots, this technology seems to confuse satire with journalism – several of the incorrect AI overviews we found seem to reference The Onion. The problem is that this AI offers an authoritative answer to millions of people who turn to Google Search daily to just look something up. Now, at least some of these people will be presented with hallucinated answers.

Source: Gizmodo

Electronic spider silk

Two fingers with the very fine electronic spider silk wrapped around

This looks promising! As with everything like this, though, the more data we capture about the body, the more we need robust privacy legislation and security protocols.

Also, I’m pretty sure this could be printed as a form of tattoo on the surface of the human skin, so it could be art as well as science.

Researchers have developed a method to make adaptive and eco-friendly sensors that can be directly and imperceptibly printed onto a wide range of biological surfaces, whether that’s a finger or a flower petal.

The method, developed by researchers from the University of Cambridge, takes its inspiration from spider silk, which can conform and stick to a range of surfaces. These ‘spider silks’ also incorporate bioelectronics, so that different sensing capabilities can be added to the ‘web’.

The fibres, at least 50 times smaller than a human hair, are so lightweight that the researchers printed them directly onto the fluffy seedhead of a dandelion without collapsing its structure. When printed on human skin, the fibre sensors conform to the skin and expose the sweat pores, so the wearer doesn’t detect their presence. Tests of the fibres printed onto a human finger suggest they could be used as continuous health monitors.

[…]

The researchers say their devices could be used in applications from health monitoring and virtual reality, to precision agriculture and environmental monitoring. In future, other functional materials could be incorporated into this fibre printing method, to build integrated fibre sensors for augmenting the living systems with display, computation, and energy conversion functions.

Source: Research | University of Cambridge

A learnt practice that placates idle hands and leaves our thoughts free

An asymmetrical triangular shaped stone tool with a sharp point and a mottled white, beige, and brown surface, displaying the skilled technique of ancient stone knapping.

One of the many things that I’ve learned from Laura is that people need things to do with their hands. That’s true with virtual workshops where emails are only a click away, but it’s also true in person.

This post talks about this as an ‘ancient need’ which might explain stone knapping. The author, Matt Webb, suggests that we could lift this ‘evolutionary burden’ somewhat by, instead of teaching kids to better tolerate boredom, teach them a skill which involves using their hands. It’s not a bad idea, you know. I was giving kids blu-tack to fiddle with 15 years ago when I was a teacher, and not just the neurodivergent ones!

If I were to try something revolutionary, I mean truly revolutionary on a generational scale, here’s what it would be:

I would sneak a new fiddle urge fulfiller into the national school curriculum.

I wouldn’t plan on teaching kids how to tolerate boredom as they get older, or how to be more comfortable than previous generations inside their own heads. Those are unstable solutions.

I mean instead I would work to come up with something in the family of pen flipping or polyrhythm finger tapping or rolling a coin over the knuckles. Or I’d invent secular rosary beads or make child-safe whittling knives.

Something like that. Self-contained, not networked. Automatic, with room for skill, dextrous.

And I’d make sure this new skill was taught and drilled before these kids even have much conscious awareness, like right when they start pre-school, so it’s there for them throughout their lives.

Source: Matt Webb

Related: Museum of Stone Tools (which is where the image is from)

How we spend our days is, of course, how we spend our lives

A black briefcase on a teal background, topped with a whimsical miniature beach umbrella and chair setup.

Although things are pretty quiet at the moment, I usually average around 20 hours of paid work per week. People seem surprised when I tell them this, but when you strip away the pointless meetings, bureaucracy, and frustration that can come with regular employment, it’s entirely possible (in normal times) to earn a decent salary working half the number of hours.

This article shares the case of Josh Epperson, who works 10-15 hours per week and earns ~$100k/year after starting what he calls ‘The Experiment’. On a side note, I’m also sharing the image that comes with the article to comment on how lazy it is: who uses a briefcase in 2024? Also, as the article mentions explicitly, Epperson is spending the time he’s not working doing community stuff, not lazing on the beach.

For me, I spend a lot of time on the side of football pitches and basketball courts. I’m almost always around for my kids, because I work from home and try and get most of my work done while they’re at school. This, to my mind, is the way it should be. Apart from school, but that’s a whole other post…

Epperson began The Experiment by scaling back the obligations on his time and money. He resigned from his role on the board of a Black film festival. He moved out of his swanky apartment to a cheaper part of Richmond and traded in his Land Rover for a Honda CR-V. Despite these downgrades, his new path came with advantages. Epperson prepared his meals and ate more healthily. He spent unhurried afternoons with friends in the garden. And he got into regular meditation and exercise.

Epperson also saw how the added time benefited his professional life. He started working on projects for the Smithsonian and an urban-farming nonprofit called Happily Natural. With more space around his work, his work got better. “In the old industrial model of employment, the more hours you put in, the more products come out,” he explained. But if the product is an idea for a marketing campaign or a headline for a website, Epperson found that there wasn’t a positive correlation between how many hours he put in and the quality of the output. With more room to seek inspiration and develop his ideas, Epperson was doing more work that made him proud.

What impressed me most from my time with Epperson is that he doesn’t treat leisure only as grist for the mill. He doesn’t unplug so that he can be more productive when he sits back down at his computer. Nor does he, like so many of us, exist in a perpetual state of half-work, swiping down at dinner to see if any new emails have come in.

For Epperson, reducing his working hours gives him the space to invest in other facets of his life. He is involved in his community. He is a generous friend. He takes care of his body. Walking the streets of Richmond with Epperson is like walking next to the mayor—he seemed to know every shopkeeper and skateboarder we passed.

Source: The Atlantic

Shark skin aircraft FTW

Two workers apply a shark skin-like coating, AeroSHARK, to an airplane’s red and white exterior, carefully working around a window with precision tools.

We need all kinds of innovations, large and small, to help address the climate emergency. I'm not sure how much 2,200 tonnes of kerosene being saved means in the big scheme of things, but the technology evidently works, and learning from nature is pretty cool.

The Swiss national airline has now incorporated the shark skin onto all of its long-haul 777 aircraft, with the final example to adorn the technology receiving it at the start of May… Last year, the airline saved nearly 2,200 tonnes of kerosene despite its 777 fleet not being fully fitted at the time.

Rather than being completely smooth, shark skin is unique in its ability to minimize drag through specific grooves, which, in aviation terms, allows for a smoother and more efficient flight.

AeroSHARK replicates this hydrodynamic property on aircraft. It is a “special film” made up of “tiny 50-micrometre riblets that reduce aerodynamic drag during flight.”

Source: Simple Flying

A Jazz-soaked Philosophy for our Catastrophic Times: From Socrates to Coltrane

YouTube thumbnail for first lecture featuring Prof. Cornel West

These Gifford Lectures at the University of Edinburgh look pretty awesome. I don’t think Cornel West uses any slides, either, so perfect to rip to MP3 and listen to while I’m out for a run! Although I guess I’ll miss some of his very expressive gestures :)

Prof. Cornel West delivers the 2024 Gifford Lecture Series at the University of Edinburgh, titled ‘A Jazz-soaked Philosophy for our Catastrophic Times: From Socrates to Coltrane’.

Source: Bella Caledonia

Digital Badging Commission

Overhead view of four professionals collaborating around a wooden table with a laptop, smartphone, and notebook, in an office setting. The image is framed in a hexagon, reminiscent of badges.

I’m reserving judgement on this initiative until I find out more, but it seems to be in the same ballpark as initiatives in the US and Europe. Hopefully, it’s a UK-focused way of getting badges more mainstream, although I’m always a little wary when I see the word ‘microcredential’ as it’s a very supply-side term.

The RSA and Ufi VocTech Trust are leading work in this area. I’m hopefully talking with Rosie Clayton soon, who’s part of the team.

Building a movement towards greater understanding, development, and adoption of digital badges by accrediting bodies, policymakers, and employers, including other micro-credentialling providers outside of our mutual networks

Exploring the quality and interoperability of digital badges used by key awarding and accrediting organisations

Making the case for a lifelong digital record of learning using digital badges and micro-credentials

Examining the feasibility of applying QA frameworks to digital badges so that they could be used to reward flexible learning pathways (e.g. in line with the lifelong learning entitlement)

Source: Digital Badging Commission

Food bank efficiency

People putting tins into a cardboard box

I’ve been at the Thinking Digital conference this week, where local guy Paul McMurray, who works for Accenture, was on stage telling us about a website called Donation Genie.

He discovered that food banks often have to stop creating food parcels for hungry families because they’re missing certain items, and so he responded to a company challenge, won, and has continued to develop his creation to integrate with various APIs to improve efficiency, and thus help more people.

Food banks and warm spaces can update their wishlists detailing what they need.

Donation Genie compiles that information, so you know exactly what your community needs right now.

With Donation Genie, you can target your donation to make the biggest impact in your area.

Source: Donation Genie

The 'threat' of fictional and factual fembots

Screenshot from 'Metropolis'

Of all of the things that have launched recently, a breath of fresh air has been 404 Media. This is another article from there, which challenges us to think about recent news from OpenAI ushering a future that is less like the film Her and more like Metropolis.

In 1927, German director Fritz Lang introduced the world to the first on-screen fembot with his adaptation of his wife Thea von Harbou’s frenetic urban dystopia novel Metropolis. The character of the Maschinenmensch, a robot woman created by a mad scientist to replicate his dead lover (a deepfake, basically) hypnotizes the effete bourgeois with a dance. The men pant and pull their hair and scream, “For her, all seven deadly sins!”

Before Metropolis, automatons were seen as entertaining, odd tinkerings of inventors and the wealthy. Their history goes back to ancient Greece, through the Middle Ages and into the 18th century. Until that point, machine-men and women were fairly evenly represented (plus a lot of little robot animals). But in the 19th century, with the arrival of the industrial revolution, something changed. People became afraid of the progress happening around them, and feared mass unemployment thanks to these new factories and machines that separated workers from the products of their own labor.

That’s when depictions of the android as female started to take over. When machines started to be seen as a threat to male control, something to be feared and never to be fully understood, they were imagined as seductive pariahs, the original black box. The Maschinenmensch is burned at the stake.

“Fictional and factual fembots each reflect the same regressive male fantasies: sexual outlets and the promise of emotional validation and companionship,” researchers Kate Devlin and Olivia Belton wrote in their 2020 paper. “Underpinning this are masculine anxieties regarding powerful women, as well as the fear of technology exceeding our capacities and escaping our control.” Everyone is fixated on the flirtatious female voice because deep down, under the jokes about e-girls being “so over” and AI girlfriends as responsible for declining birth rates, people are actually, seriously afraid.

Source: 404 Media

An end to growth?

Sylized illustration of a snail in grass

Kate Raworth, who came up with the idea of Doughnut Economics, writes in The Guardian about how we need to move beyond the idea of endless growth. This goes beyond alternatives to GDP such as the Human Development Index (HDI) to take into account of environmental factors.

Instead of pursuing endless growth, it is time to pursue wellbeing for all people as part of a thriving world, with policymaking that is designed in the service of this goal. This results in a very different conception of progress: in the place of endless growth we seek a dynamic balance, one that aims to meet the essential needs of every person while protecting the life-supporting systems of our planetary home. And since we are the inheritors of economies that need to grow, whether or not they make us thrive, a critical challenge in high-income countries is to create economies that enable us to thrive, whether or not they grow.

[…]

When we turn away from growth as the goal, we can focus directly on asking what it would take to deliver social and ecological wellbeing, through an economy that is regenerative and distributive by design. There are many possibilities – such as driving a low-carbon, zero-waste industrial transformation, with a green jobs guarantee, alongside free public transport, personal carbon allowances, and progressive wealth taxes. Policies like these were, only a decade ago, considered too radical to be realistic. Today they look nothing less than essential.

Source: The Guardian

License to Drill

The older I get, the more different kinds of workouts (or drills) I need to do to keep supple and fit. This ‘James Bond’ workout could be useful, although I’m pretty sure the suit, gun, and martini are optional…

From the James Bond novels, we know that 007 liked to do all sorts of physical activities that could count as exercise: boxing, judo, swimming, and skiing. He was also a golfer, so he got some activity in that way.

[…]

In [From Russia With Love] (one of the 5 best books in the Bond canon), Fleming describes a short calisthenics routine that his secret agent does that’s capped off with a “James Bond shower.”

Source: The Art of Manliness

A series of exercises inspired by the James Bond novels

Navigating financial uncertainty isn't just about 'trying harder'

Humphrey Ker in Welcome to Wrexham, S2:E5 (A series of three photographs of a white man wearing glasses; he has short curly hair and a beard and is saying, 'It's a very British mentality at times of: everything's bad, don't expect better for yourself, just get on with it until you're dead')

When you’re a freelancer, consultant, or part of an organisation that relies on contracts or funding from third parties, you get used to financial peaks and troughs. This year, so far, though, has been flat. Worryingly flat. I’ve never seen so many Open To Work badges on LinkedIn. I’ve even put up the bat signal.

In this post, Rachel Coldicutt shares some worrying news about organisations in the UK’s social sector — including her own. I don’t know what’s going on, to be honest. Putting on my tinfoil hat would suggest various conspiracy theories, whereas donning my systems thinking hat would suggest a confluence of factors including Brexit, pre-election concerns, experimentation with AI, etc.

Anyway, if you need some help at the intersection of learning, technology, and community, I’m here to help! My organisation, WAO, has worked with organisations such as Greenpeace, MIT, and Sport England. We’ve got lots of openly-licensed resources which we can use for consulting or workshops, and we’ve also got experience in running impactful programmes. Let’s have a chat.

“It’s a very British mentality at times of: everything’s bad, don’t expect better for yourself, just get on with your life until you’re dead.”

Another “British mentality” is that most people don’t like to talk about money. Not having it is seen as a failure, asking for it is unimaginably crass. You’re just, somehow, supposed to have it.

[…]

This year should be one of pump priming and relationship building, a time of new beginnings and opportunities. The changes many of us want to see certainly won’t happen quickly, but if we don’t collectively make an effort to share ideas, tell stories, show what’s possible - well, they won’t happen at all. This is a good time to get ready, to rebuild networks and ideas, to make things happen and create the conditions needed for change.

It’s hard to imagine a better future and build alternatives when you’re worrying about the bills.

[…]

From conversations I’m having, it feels like we’ve collectively reached a pretty urgent financial impasse and if we don’t break it, many more organisations will find they have to close this year.

[…]

Usually, organisations like mine - who take on a mix of projects from the small (£15-30k) to the medium (£50-85k) to the large (£150k+) – would get a little financial bump in March and April. We’re just small enough to benefit from the flurry of year-end underspends, and big enough to take part in the proposals and procurement rounds that usually begin with the new financial year. By the end of April, any short- and medium-term gaps in the pipeline have usually been filled.

That hasn’t happened this year. And it’s not just us, everyone I speak with is experiencing the same thing.

Source: Just Enough Internet

Every drama requires a fool

A collection of illuminated red Chinese lanterns with tassels and inscriptions, closely clustered against a dark background, creating a luminous and festive display.

Some of these are perhaps a bit too literal for their own good, but this list of Chinese proverbs includes some I hadn’t seen/heard before. But then, I guess, lots of things are dubbed ‘Chinese proverbs’ that are of unknown provenance.

Here are three of my favourites that I hadn’t come across before:

If you fall down by yourself, get up by yourself.

Every drama requires a fool.

Smart people also do stupid things.

Source: Futility Closet

Image: Henry & Co.

Another chance tonight

A black-and-white xkcd chart dividing various experiences into four quadrants based on whether they are exciting to see in person and whether they can be chased in a convoy of vehicles with coordination for optimal viewing. The items listed include natural wonders, tourist attractions, and everyday occurrences.

It’s looking like we might get another chance to see the aurora borealis tonight after the spectacular display in most places, including near us.

I thought this xkcd chart was funny/interesting given how underwhelmed I’ve been going to see things like the meridian line in Greenwich, London.

Source: xkcd

More on digital afterlife services

Three human skulls with neon pink, cyan, and yellow hues against a black background, reminiscent of an artistic, radiographic interpretation.

If there are people falling in love with chatbots powered by generative AI, you can bet there are people creating chatbots of deceased loved ones. I can understand the temptation, but I can’t see it being a particularly healthy one. And as this article points out, the vectors for disinformation and manipulation are immense.

Known as “deadbots” or “griefbots,” these AI clones are trained on data about the deceased. They then provide simulated interactions with virtual recreations of the departed.

This “postmortem presence” can cause social and psychological harm, according to researchers from Cambridge University.

Their new study highlights several risks. One involves the use of deadbots for advertising. By mimicking lost loved ones, deadbots could manipulate their vulnerable survivors into buying products.

Another concern addresses therapeutic applications. The researchers fear that these will create an “overwhelming emotional weight.” This could intensify the grieving process into endless virtual interactions.

Source: The Next Web

We got the internet that reflects who we are

Four square tiles, each depicting a guillotine, against a consistent light blue background.

John Willshire, with whom I had an interesting chat this week, asked a question of Mike Monteiro. I confess I don’t follow the latter’s work, but I liked his thoughtful and considered answer to John’s question: Which internet do you wish we’d ended up with instead of the one we got?

This is the most pertinent part of it, as far as I’m concerned:

Ultimately, I think we got the internet that reflects who we are.

We got an internet where a select few people have a lot of control because they have money. We got an internet where loud angry racists demand a lot of attention because they believe they deserve that attention. But we also got an internet where kids manage to connect with one another. We got an internet where trans kids can get make-up tutorials. We got an internet where the horrors of a genocide can be exposed, as much as the powers-that-be try to stifle that from happening. We got an internet where Kendrick can go ham. We got an internet that reflects both the horror and the beauty of who we are as human beings.

We got the internet that reflects who we are.

Do I want a better internet? Sure. I mean, I’m on it right now. I’d enjoy it more if it wasn’t full of terfs and nazis. But the path to a better internet starts with building a better society. Which starts by redefining what we mean when we say we.

Source: Mike Monteiro’s Good News

ShareOpenly (to the Fediverse)

Screenshot of ShareOpenly plugin in action

Ben Werdmuller created ShareOpenly to make it easier to share web content, such as blog posts, to the newer crop of social media sites. Happily, he’s announced that there’s now a nifty icon and a WordPress plugin. I’ve installed the latter over at my personal blog.

In related news, publishing platform Ghost has started work on their ActivityPub integration. This is the protocol that allows social media sites such as Mastodon, Pixelfed, and Micro.blog (which powers this blog) to talk to one another.

It’s been a little over a month since I launched ShareOpenly, my simple tool that lets you add a “share to social media” button to your website which is compatible with the fediverse, Bluesky, Threads, and all of today’s crop of social media sites.

You might recall that I built it in order to help people move away from their “share to Twitter” buttons that they’ve been hosting for years. Those buttons made sense from 2006-2022 — but not so much in a world where engagement on Twitter/X is falling, and a new world of social media is emerging.

Source: werd.io

Strategic Design resources

Circular diagram of the Design Council's Systemic Design Framework with intersecting red lines creating six segments labeled with phases of the design process: Connections and Relationships, Orientation and Vision Setting, Leadership and Storytelling, Continuing the Journey, Catalyse, Create, Reframe, and Explore.

I had a chat with John Willshire today from Smithery and while we were talking he mentioned a few resources and books:

Cullernose Point

A black and white artwork depicts a dynamic and textured wave, giving off an impressionist or woodblock print style, signed at the bottom by the artist.

I’ve got too much art waiting to go up in my new house to be buying more at the moment, but I’m very tempted by these exquisite wood engraving prints from emeritus professor John Altringham, an ecologist and conservation scientist.

In particular, the one I’ve featured for this post is of a place in my home county of Northumberland that I’ve never visited: Cullernose Point. It also has given me some ideas for some mountain expeditions, something that I really need in my life at the moment.

Source: John Altringham

Systems ambiguity and chaos

A young child is standing on a large, beige, geometrically-patterned bean bag, reminiscent of Buckminster Fuller's geodesic domes

Silvio Lorusso’s ‘intervention’ during Domus Academy’s roundtable is well worth a read. He talks about the need, in some ways, to fight complexity as more of a cultural practice than a practical framework. How useful is it, he wonders, to point to something as ‘complex’? Is it mostly of help to professionals seeking to assert authority and control?

Disciplines are arbitrary compartments of knowledge: they strategically define their boundaries in order to demarcate some particular problems and solve them, or at least address them. Sociology, for example, was born in the early 19th century to address the problem (and, therefore, problems) of society. However, disciplines won’t meekly confine themselves within their artificial boundary; rather, their internal discourse will push the border, extending it. This tendency is especially evident in the design field, where you often hear that “everything is design”.

Besides the physiological swelling of the disciplines, we witness a phenomenon which is historically specific. Martin Oppenheimer (quoted by philosopher Donald Schön) called it a “proletarianization of the professions”. When everyone can call themselves a professional, the reputational and financial returns of being one shrink. Furthermore, there is a tangible distrust toward the figure of the expert. Just think of the field of economy or virology…

So, what do professionals do to regain prestige? They accelerate the expansion of the disciplinary confines, creating connections in an almost conspiratorial, apophenic mode. Carlo Bramanti, who is currently working on the notion of “conspiratorial design”, points out not only the visual but also the conceptual similarities between diagrams made by legitimized design figures like Victor Papanek and paranoid-style infographics about “Covid 5G” by an obscure graphic artist named Dylan Louis Monroe. What do they have in common? They want to produce and project a sense of control on the messiness of the world (Richard Hofstadter: “the paranoid mentality is far more coherent than the real world, since it leaves no room for mistakes, failures, or ambiguities”). How do they do so? By means of hypertrophy: by adding always more relations to their system, which becomes a totalizing one: it becomes the system.

This is why complexity is ultimately a reassuring category. Reassuring to whom? To the professionals, who are there to explain and clarify it, to seal it with “the authoritative stain of scientific enquiry”, as Georgina Voss puts it. And there is a further paradox. Do you remember the Game of Thrones “It’s not that simple” meme? Well, to reassure themselves, experts will have an incentive to expand their system of reference, and therefore create more links and relationships. This leads to ever more intricate diagrams, to what Voss calls “the airport-bookshop model of systems thinking which tends to involve a lot of graphs and urges to ‘shift your mindset’”. But by adding links and relationships one doesn’t necessarily reach galaxy brain level. More likely, one will just generate more confusion, more noise, more chaos.

Source: Entreprecariat

Image: Bucky’s Nightmare by Mathieu Lehanneur.

But here we are: the diaspora of online communities

An empty lecture theatre

Laura often says that online communities don’t exist on a single platform, but all over the web. They might seem to have a ‘home’ in one place, but conversations and hashtags to gather around are distributed.

Twitter, says Alan Levine, was an anomaly in that regard. It felt like a ‘public square’ even though it was owned by a private company. As I said a decade ago, ‘software with shareholders’ is a problem. Something to avoid.

Now, I spend most of my social time on the Fediverse, but it’s not a place where I talk much about my professional work. That seems to have moved to LinkedIn, more from necessity than choice. It’s not a great state of affairs; I wish it were different. But here we are.

There is a myth. Cue the string section.

That there was once a place for all to gather, share, be festive, develop new connections, every course a hashtag, topple a few governments, people power.

Then came an evil billionaire who ruined it all, those who gathered were cast out, a diaspora.

No end.

Yes the Musky One was/is a horrible scourge, but all he did was hasten a decline. The birdhouse he bought was already unwinding in the mid 2010s with more algorithm cruft, more ads, more malfeasants. In my 2016 year end post, having been one of the world’s stupefied witnesses to the script of a reality show election right out of the Black Mirror

[…]

It’s too simple, too convenient to scapegoat it all on that smelly tyrant, the reason for the dispersion is we dispersed. And it’s not a platform thing, it’s that thing in our hands.

Source: CogDogBlog

"All that any honest review actually does is just accelerate whatever was already going on"

This is a masterpiece in defending yourself while taking the high ground and explaining to your audience what it is that you actually do. Brownlee is an amazing communicator and there’s a reason why he’s got one of the most-watched YouTube channels.

Warm Data

Warm data napkin sketch

Amy Daniels-Moehle shared a link to this during the ORE community call yesterday. She’s been doing some kind of intensive course with the International Bateson Institute and mentioned the concept of ‘warm data’.

Nora Bateson explains the origin and need for warm data in this short video. It’s definitely something I need to explore further, perhaps as part of my studies towards an MSc in Systems Thinking.

The International Bateson Institute exists to generate and give access to information that offers a wider vision. The focus of inquiry is on the interrelational processes between and among systems. It can involve recognising how patterns repeat and reflect each other among multiple contexts and across multiple systems – many of these systems’ maintenance and renewal is critical in the coming decades.

The underlying premise of the IBI is to address and experiment with how we perceive. Our work is to look in other ways so that we might find other species of information and new patterns of connection not visible through current methodologies.

We call this information “Warm Data”.

Source: The International Bateson Institute

Spectacular timelapse over the ALMA Observatory

This timelapse, shared by the social media account Wonder of Science, is just fantastic. Filmed by Christoph Malin, it captures an entire night, from sunset to sunrise, over the ALMA Observatory on the Chajnantor Plateau in the Chilean Atacama Desert.

More information on the European Southern Observatory website.

How to easily generate image descriptions and alt text

Screenshot of webapp, with image on the left, and generated text on the right

This is pretty great: you upload an image and it creates a detailed text description, along with more concise alt text. I’ve previously been using GPT-4 for this, but this is more focused and useful.

Source: Arizona State University

It turns out the apple can fall pretty far from the tree

Father teaching his son to skateboard

How much influence do parents have on their children? Less than we’d assume, it appears.

I have to say, much to my shame, that it’s taken me a long time to realise how different my own kids are from me. This shouldn’t be a surprise, as I’m quite different from my own parents. There are, of course, some pretty huge overlaps in our interests, but that’s to be expected, given how much time we’ve spent together.

[P]sychologists have known for ages that parents and children don’t particularly share the “big five” personality traits (extroversion, agreeableness, openness, conscientiousness and neuroticism). It is getting attention now because of a study that tried to change the way this question of family similarity is explored. Rather than participants only self-reporting their personality traits, they also chose a third party who knew them well to assess them.

This novel approach suggested more similarity between parents and offspring – approximately 40% rather than the 25% of previous studies. But that is still very low. The study concluded that it was “impossible to accurately predict a child’s personality traits from those of their mother or father” and that most relatives are not “much more similar than strangers”.

Huh. So it’s just our pattern-seeking brains that make us think little Timmy is “cheeky like his dad”; you might as well say he is “cheeky like that gull”. As a parent, this felt like a weight lifted: if my kids are like me (God help them), it’s not my fault – just dumb luck. The same study’s findings on the impact of home environment felt good, too: “Growing up together does not make people more similar.”

[…]

But parents aren’t entirely off the hook. Last year, a study of 9,400 11- to 17-year-olds declared: “Parent personalities have a significant impact on a child’s life.” The detailed results concluded: “Kids with neurotic parents scored relatively low on several measures, including grades, overall health, body mass index … and time spent on leisure activities.” (Sorry, kids, but it’s not just me and my fellow neurotics getting guilted: extroverts’ offspring also get worse grades.)

It would be bizarre if the people who raised us had no influence on how we turned out, but surely we will never understand with any clarity how our parents screw us up and how we screw up our kids in turn. There are too many variables; how could you ever work out what makes us who we are, what is innate and what isn’t? As one psychologist has put it, the most direct way to weigh nature against nurture is “to randomly assign children to parents”.

Source: The Guardian

Limiting virtues

A warmly lit café front with large windows on a traditional two-story building, featuring a clean, modern design and the name 'faro' above the windows. The exterior is painted white with a brick pavement, and it's twilight outside.

Sara Hendren discusses what I would call ‘creative constraints’ as applied to organisational policies. For example, a coffee shop she knows has a no-laptops policy which is “gently, but strictly, enforced”, and which changes the vibe:

A no-laptops policy means you can’t get a certain kind of work done, but it does mean everyone present will be a little more eyes-up-and-talking, or maybe absorbed by a book or notebook. The activities will be at the speed of the body, one to another. Is it nostalgic and precious? Maybe. But it’s not the only café in town to make this move, and I think there’s some signal there. Faro started out with no-laptops only on weekends, and the policy was welcome enough to make it a daily norm. Over at Zuzu’s Petals, it’s no devices of any kind.

Source: undefended / undefeated

You are what you read

A piece of textured white paper with rough edges taped onto a soft peach background.

Jim Nielsen uses a Ralph Waldo Emerson quotation as a jumping-off point to discuss something important:

I cannot remember the books I’ve read any more than the meals I have eaten; even so, they have made me.

To me, all of those people with super-intricate systems, whether for academic work, pleasure, or something in between, are faintly ridiculous. The point isn’t to replicate a machine and remember everything you’ve ever read.

Nielsen writes:

It’s a good reminder to be mindful of my content diet — you are what you ~~eat~~ read, even if you don’t always remember it.

For me, I bookmark a bunch of stuff that I never get around to reading properly. Some of it ends up here on Thought Shrapnel in a way that I can process and search through at a later date. The added bonus is that you, dear reader, get to see it too.

Source: Jim Nielsen’s Blog

Image: Olga Thelavart

Book reading and secondary orality

A stylized illustration of an open book made to look like an open laptop, surrounded by several closed books and a pen, all set against a vibrant red background.

This is a bit of a strange ramble-post which largely rehashes the discussion/debate we’ve been having for over 15 years about the qualitative difference between reading on screens versus reading on paper. The difference is that there is the added layer of moral panic about people reading fewer books.

What is rarely included in this kind of thing is that we are emerging from the ‘Gutenberg Parenthesis’, a period in which the written word dominated. That wasn’t true before the printing press, and it’s unlikely to be true going forward given the preponderance of new media. Walter Ong called this post-Gutenberg phase ‘secondary orality’, as it depends upon literate culture.

I will always prefer the written word, mainly because it’s more information dense than video and audio content. But there’s room for everything without endless hand-wringing.

The e-reading apps have their merits. At times, they become respites from the other, more addictive apps on my phone. Switching to a book from, say, Twitter, is like the phone-scroller’s version of a nice hike—the senses reorient themselves, and you feel more alert and vigorous, because you’ve spent six to eight minutes going from seven to eleven per cent of Arthur Koestler’s “Darkness at Noon.” Or you might feel a sense of pride because you’ve reached the sixty-per-cent mark in Elton John’s autobiography, “Me,” which isn’t a great work of literature but at least is better than Twitter. The book apps also seem to work as a stopgap for children, who are always lusting after screen time of any sort. My seven-year-old daughter has read hundreds of books on the Libby app, which lets you check out e-books from public libraries you belong to. As a parent, I find this wildly preferable to hearing the din of yet another stupid YouTube short or “Is it Cake?” episode coming through her iPad’s speakers.

Still, the arrival of these technologies has been accompanied by a steady decline in the number of books that get read in any form. A pair of 1999 Gallup polls, for example, found that Americans, on average, had read 18.5 books in the course of the previous twelve months. (It should be noted that these were books people had read, or said they had read, “either all or part of the way through.”) By 2021, the number had fallen to 12.6. In 2023, a National Endowment for the Arts survey found that the share of American adults who read novels or short stories had declined from 45.2 per cent in 2012 to 37.6 per cent in 2022, a record low. There are plenty of theories about why this is happening, involving broad finger-pointing toward the Internet or the ongoing influence of television, or even shifting labor conditions, as more women have entered the workforce.

Source: The New Yorker

Optimising for the wrong things

A creative workspace with watercolor paintings featuring yellow and green floral motifs, paint cubes, brushes, and earphones on a rustic wooden table. Amongst these items are a potted plant, a bouquet of yellow wildflowers, and a pair of sunglasses, all suggesting an artist's break in progress.

Solid advice here.

You aren’t famous. Anything you do or create will probably receive little to no attention, so stop optimizing for a non-existent audience and instead focus on what makes you enjoy the activity.

[…]

The most egregious thing you can do with any activity is daydream about how you can make money off of it. That’s the quickest way to optimize for the wrong things and suck the fun right out of it. Most likely you will stop doing the activity almost immediately, so save the money-making schemes for work.

In the end, find something you enjoy doing and just do it because you enjoy it. If you have to, make some goals for yourself, but never for your “audience”.

Source: Ash Newman

Image: Elena Mozhvilo

There's only so much lemonade you can make when life is firing lemons at you

I just had to post this image, which I discovered via the Fediverse. It’s definitely a riposte to all of those people who say that people who have been underserved and marginalised by a biased system should “try harder,” “be more resilient,” or “show more grit”.

A person sitting at a table making lemonade with a manual juicer, surrounded by piles of lemons and filled bottles. Above, a showerhead pours more lemons onto the overwhelmed individual and the table, exaggerating the phrase "when life gives you lemons, make lemonade." Artist: Will Santino.

Real-time deepfake videos for fun and exploitation

Montage of phone in front of someone's face and ring light in background

This is a PSA to be careful out there: deepfakes have come to regular, real-time video calls. People are getting scammed.

The Yahoo Boys have been experimenting with deepfake video clips for around two years and shifted to more real-time deepfake video calls over the last year, says David Maimon, a professor at Georgia State University and the head of fraud insights at identity verification firm SentiLink. Maimon has monitored the Yahoo Boys on Telegram for more than four years and shared dozens of videos with WIRED revealing how the scammers are using deepfakes.

[…]

The Yahoo Boys’ live deepfake calls run in two different ways. In the first, shown above, the scammers use a setup of two phones and a face-swapping app. The scammer holds the phone they are calling their victim with—they’re mostly seen using Zoom, Maimon says, but it can work on any platform—and uses its rear camera to record the screen of a second phone. This second phone has its camera pointing at the scammer’s face and is running a face-swapping app. They often place the two phones on stands to ensure they don’t move and use ring lights to improve conditions for a real-time face-swap, the videos show.

The second common tactic… uses a laptop instead of a phone. (WIRED has blurred real faces in both videos.) Here, the scammer uses a webcam to capture their face and software running on the laptop changes their appearance. Videos of the setup show scammers are able to see their own face alongside the altered deepfake, with just the manipulated image being displayed over the live video call.

[…]

Some of the Yahoo Boy videos are unbelievable, obvious fakes, while others appear plausible. When they’re viewed live, on a mobile phone, with unstable connections, any obvious flaws may be masked—especially if a scammer has spent months social-engineering their victim.

[…]

Ronnie Tokazowski, the chief fraud fighter at Intelligence for Good, which works with cybercrime victims, says because the Yahoo Boys have used deepfakes for romance scams, they’ll pivot to using the technology for their other scams. “This is kind of an early warning where it’s like: ‘OK, they’re really good at doing these things. Now, what’s the next thing they’re going to do?’”

Source: WIRED

It's not sick note culture, it's systemic failure in governance

Chart showing the number of people waiting for treatment in the UK. Numbers have risen sharply over the last decade or so.

This is, as you’d expect, a restrained article from the BBC. But it still flies in the face of the government’s talk of a ‘sick note culture’ in the UK. Instead, as anyone who lives here will attest, it’s the financial crisis, Brexit, and the pandemic, compounded by repeated government failure, including underfunding the NHS.

Research by the Health Foundation shows there are as many people aged 16 to 64 in work whose health limits what they can do as there are out of work because of ill-health.

Overall, it estimates nearly a fifth of the working-age population in the UK has what it calls a work-limiting condition.

In fact, the think tank believes the problem has become so bad that it is threatening the economic potential of the country.

[…]

So why are working-age people so ill? Christopher Rocks, who heads up the Health Foundation’s work in this area, says it is a “complicated” picture.

He says while there has been a lot of focus on the issue since the pandemic, the trend has actually been developing for the past decade at least.

“The 2008 financial crisis had a major impact on society - we saw an economic downturn and public spending cuts. That had an impact on people’s health in many different ways. The pandemic and subsequent cost of living crisis exacerbated trends, but the signs were there before Covid hit.

“Access to health care has become more difficult, while those fundamental building blocks of health - such as good housing and adequate incomes - are under strain.”

How that has affected people varies depending on their age and where they live. Research published this week warned the numbers with major illness were set to increase significantly, with the people in the most deprived areas suffering the most - many with multiple conditions.

The work, also published by the Health Foundation, found there were three main conditions causing a significant burden of ill-health: chronic pain, type 2 diabetes and mental health problems. Each is a reflection of the different challenges facing the country.

Source: BBC News

Book publishing doesn't work

Books on their side

Elle Griffin dug into the details of a 2022 court case in which Penguin Random House attempted to acquire another publishing house, Simon & Schuster. Some of the details shared are eye-opening.

I don’t think the models used by the book industry, or the academic publishing industry, are long for this world.

I think I can sum up what I’ve learned like this: The Big Five publishing houses spend most of their money on book advances for big celebrities like Britney Spears and franchise authors like James Patterson and this is the bulk of their business. They also sell a lot of Bibles, repeat best sellers like Lord of the Rings, and children’s books like The Very Hungry Caterpillar. These two market categories (celebrity books and repeat bestsellers from the backlist) make up the entirety of the publishing industry and even fund their vanity project: publishing all the rest of the books we think about when we think about book publishing (which make no money at all and typically sell less than 1,000 copies).

[…]

The DOJ’s lawyer collected data on 58,000 titles published in a year and discovered that 90 percent of them sold fewer than 2,000 copies and 50 percent sold less than a dozen copies.

[…]

Having a lot of social media followers or fame doesn’t guarantee it will sell. The singer Billie Eilish, despite her 97 million Instagram followers and 6 million Twitter followers, sold only 64,000 copies within eight months of publishing her book. The singer Justin Timberlake sold only 100,000 copies in the three years after he published his book. Snoop Dogg’s cookbook saw a boost during the pandemic, but he still only sold 205,000 copies in 2020.

[…]

The publishing houses may live to see another day, but I don’t think their model is long for this world. Unless you are a celebrity or franchise author, the publishing model won’t provide a whole lot more than a tiny advance and a dozen readers. If you are a celebrity, you’ll still have a much bigger reach on Instagram than you will with your book!

Personally, I could not be more grateful to skip the publishing houses altogether and write directly for my readers here, being supported by those who read this newsletter rather than by a publishing advance that won’t ultimately translate to people reading my work.

Source: The Elysian

Image: Tom Hermans

Social media without an audience

The view from inside an ice cave, looking out at a starry night sky.

What I appreciate about Drew Austin’s writing is how concisely he can string together important points. Go and read the three long paragraphs of this post, which I’ve summarised out of order below.

My understanding is that Austin is saying that our mental model of social media is out of kilter with the current reality. We’re pretending that the current landscape is in any way similar to that of a decade ago.

[A] 2021 essay, The Brazilianization of the World by Alex Hochuli, describes how “the fate of being modern but not modern enough now seems to be shared by large parts of the world: WhatsApp and favelas, e-commerce and open sewers.” As a small cohort of venal elites separates itself, physically and socially, from the much larger and poorer population in which it’s embedded, it creates an idea of interior and exterior existence. The Twitch streamer with no audience anticipates life on the outside, in the dead public space of a Brazilianized, enclave-gated internet, a ground that shifted under our feet with little warning, turning us into street buskers playing music we didn’t realize no one could hear.

[…]

Talking to no one is the near future of social media, the digital equivalent of warming your hands over an oil drum bonfire in an abandoned city—what you do when you missed the last bus out of town and have to loiter as comfortably as possible in the ruins. We may have once imagined that social media would ultimately end by imploding suddenly, releasing us from the last day of school into a summer of the real, but no such catharsis is coming. When institutions die now, they rarely give us the closure of ceasing to exist—they live on in zombie form, and we learn to tolerate the gradually worsening conditions they impose. We stick around Twitter because we need to for professional reasons, we may tell ourselves, but the real job is just scavenging copper wires from the wreckage.

Source: Kneeling Bus

Image: Patrick Busslinger

How not to mince about like a little weasel

Russ Cook running in Africa

It would be remiss of me not to mark the extraordinary achievement of Russell “Hardest Geezer” Cook, who has run the entire length of Africa. This interactive map not only charts the daily progress he made, but also links to his social media accounts.

My favourite part of the story, which backs up his nickname, comes when he had scans due to persistent back pain. Finding no bone damage, he concluded that “the only option left was to stop mincing about like a little weasel, get the strongest painkillers available and zombie stomp road again”.

Incredible.

The 27-year-old from Worthing, West Sussex, said he had struggled with his mental health, gambling and drinking, and wanted to “make a difference”.

After running through 16 countries, he has raised in excess of £700,000 for charity and has completed his final run.

As he crossed the finish line at about 16:40 BST in Ras Angela, Tunisia, Mr Cook was greeted by a shouting crowd, with many chanting “geezer”.

“I’m pretty tired,” he told reporters and in a post on X, formerly known as Twitter.

Source: BBC News

Tearing your anger into strips

Self-reported anger during Experiment 1 (left) and Experiment 2 (right). Significant differences between groups emerged at the final time point, due to the experimental manipulations. Possible values for anger range from 1 to 6. Each vertical line illustrates the 95% confidence interval for each group.

A new paper in Nature suggests that writing down your feelings of anger and then disposing of the piece of paper can rid you of the angry feelings. Interestingly, or tellingly, the paper starts by talking about parental anger and the importance of demonstrating emotional self-regulation.

I’ve done something similar in terms of emotional processing with my own kids. For example, when my son was around four years old, the bird hide in the park behind our house was set on fire deliberately. An act of arson. He was inconsolable, and had nightmares. I got him to draw a picture of what had happened and to use it to talk things through, which seemed to be cathartic.

Anger suppression is important in our daily life, as its failure can sometimes lead to the breaking down of relationships in families. Thus, effective strategies to suppress or neutralise anger have been examined. This study shows that physical disposal of a piece of paper containing one’s written thoughts on the cause of a provocative event neutralises anger, while holding the paper did not. In this study, participants wrote brief opinions about social problems and received a handwritten, insulting comment consisting of low evaluations about their composition from a confederate. Then, the participants wrote the cause and their thoughts about the provocative event. Half of the participants (disposal group) disposed of the paper in the trash can (Experiment 1) or in the shredder (Experiment 2), while the other half (retention group) kept it in a file on the desk. All the participants showed an increased subjective rating of anger after receiving the insulting feedback. However, the subjective anger for the disposal group decreased as low as the baseline period, while that of the retention group was still higher than that in the baseline period in both experiments. We propose this method as a powerful and simple way to eliminate anger.

Source: Nature

If you're going to go, you might as well go... weirdly?

Illustration of the death of Aeschylus in the 15th century Florentine Picture Chronicle by Maso Finiguerra. Original is in the British Museum.

I stumbled across a Wikipedia page entitled ‘List of unusual deaths’. I was only going to share three of them, but there are so many bizarre ones on there that I couldn’t help sharing more.

Sigurd the Mighty of Orkney (892 CE): The second Earl of Orkney strapped the head of his defeated foe Máel Brigte to his horse’s saddle. Brigte’s teeth rubbed against Sigurd’s leg as he rode, causing a fatal infection, according to the Old Norse Heimskringla and Orkneyinga sagas.

Hans Staininger (1567): The burgomaster of Braunau (then Bavaria, now Austria), died when he broke his neck by tripping over his own beard. The beard, which was 4.5 feet (1.4 m) long at the time, was usually kept rolled up in a leather pouch.

Thomas Otway (1685): The English dramatist fell on hard times and was suffering from poverty in his later years, and was driven by starvation to beg for food. A gentleman who recognized him gave him a guinea, with which he hastened to a baker’s shop, purchased a roll, and choked to death on the first mouthful.

John Cummings (1809): After seeing a circus knife-swallower, seaman John Cummings began actually swallowing knives. On one occasion, he swallowed four knives, and quickly passed three with no ill-health. He later swallowed 14 knives, and after some days with abdominal pain, he passed all of them. He finally swallowed 20 knives and a clasp knife case, but after a few days, he had only passed the case; he died after four years in pain. On autopsy, a knife blade and spring were found in his intestines, and between 30 and 40 fragments of metal, wood, and horn in his stomach.

Mathilda of Austria (1867): The daughter of Archduke Albrecht, Duke of Teschen set her dress on fire while trying to hide a cigarette from her father, who had forbidden her to smoke.

Sir William Payne-Gallwey, 2nd Baronet (1881): The former British MP died after sustaining severe internal injuries when he fell on a turnip while hunting.

Thornton Jones (1924): The lawyer from Bangor, Gwynedd, Wales, woke up to find that he had his throat slit. Motioning for a paper and pencil, he wrote, “I dreamt that I had done it. I awoke to find it true”, and died 80 minutes later. He had done it himself while unconscious. An inquest at Bangor delivered a verdict of “suicide while temporarily insane”.

Isadora Duncan (1927): The American dancer broke her neck in Nice, France when her long scarf became entangled in the open-spoked wheel and rear axle of the Amilcar CGSS automobile in which she was riding.

David Grundman (1982): While shooting at cacti with his shotgun near Lake Pleasant Regional Park, Arizona, he was crushed when a 4-foot (1.2 m) limb detached and fell on him.

Vladimir Likhonos (2009): The 25-year-old student of Kyiv Polytechnic Institute from Konotop was killed when his chewing gum exploded. He had a habit of dipping his chewing gum in citric acid to increase the gum’s sour taste. On his work table police found about 100 grams (3.5 oz) of unidentified explosive powder which he used for chemistry studies at home. It resembled citric acid, and it is thought that he confused the two, having accidentally coated his gum in the explosive powder before chewing it. The explosive was found to be four times stronger than TNT, and the explosion was possibly triggered either by reacting with Likhonos’s saliva, or the pressure exerted by him chewing on the gum and explosive powder.

Ilda Vitor Maciel (2012): The 88-year-old died in a hospital in Barra Mansa, Rio de Janeiro, allegedly as a result of nursing technicians injecting soup through her intravenous drip instead of her feeding tube.

Sam Ballard (2018): The 29-year-old from Sydney, Australia, died from angiostrongyliasis after eating a garden slug as a dare eight years earlier.

Shivdayal Sharma (2023): The 82-year-old was reportedly urinating next to a train track in the region of Alwar, India, when a cow was hit by the Vande Bharat express train. The animal was launched 100 feet (30 m) into the air before landing on Sharma, killing him instantly.

Source: Wikipedia

Image: The death of Aeschylus, killed by a turtle dropped onto his head by a falcon

A pharmacology of digital tools

A silhouette of a person taking a photo of a city skyline against a vibrant red sunset reflected in the water.

This article in Aeon is the first time I’ve come across the French philosopher Bernard Stiegler, who owned a jazz club that was shut down for illegal prostitution, and who developed his philosophy of ‘technics’ while in prison for armed robbery.

Stiegler saw technics as the foundation of human existence, influencing our future possibilities and our sense of being. His view was that acknowledging the role of technology is essential to understand our reality and imagine alternative futures. He believed that while technology has the potential to standardise and limit our experiences, it also offers the ability to reshape human identity and cultural practices positively.

One to explore further, especially in terms of his idea of a pharmacology of digital tools.

In the late 20th century, Stiegler began applying [his ideas] to new media technologies, such as television, which led to the development of a concept he called pharmacology – an idea that suggests we don’t simply ‘use’ our digital tools. Instead, they enter and pharmacologically change us, like medicinal drugs. Today, we can take this analogy even further. The internet presents us with a massive archive of formatted, readily accessible information. Sites such as Wikipedia contain terabytes of knowledge, accumulated and passed down over millennia. At the same time, this exchange of unprecedented amounts of information enables the dissemination of an unprecedented amount of misinformation, conspiracy theories, and other harmful content. The digital is both a poison and a cure, as Derrida would say.

This kind of polyvalence led Stiegler to think more deliberately about technics rather than technology. For Stiegler, there are inherent risks in thinking in terms of the latter: the more ubiquitous that digital technologies become in our lives, the easier it is to forget that these tools are social products that have been constructed by our fellow humans. How we consume music, the paths we take to get from point A to point B, how we share ourselves with others, all of these aspects of daily life have been reshaped by new technologies and the humans that produce them. Yet we rarely stop to reflect on what this means for us. Stiegler believed this act of forgetting creates a deep crisis for all facets of human experience. By forgetting, we lose our all-important capacity to imagine alternative ways of living. The future appears limited, even predetermined, by new technology.

[…]

The pharmacology of technics, for Stiegler, presents opportunities for positive or negative relationships with tools. ‘But where the danger lies,’ writes the poet Friedrich Hölderlin in a quote Stiegler often turned to, ‘also grows the saving power.’ While Derrida focuses on the ability of the written word to subvert the sovereignty of the individual subject, Stiegler widens this understanding of pharmacology to include a variety of media and technologies. Not just writing, but factories, server farms and even psychotropic drugs possess the pharmacological capacity to poison or cure our world and, crucially, our understanding of it. Technological development can destroy our sense of ourselves as rational, coherent subjects, leading to widespread suffering and destruction. But tools can also provide us with a new sense of what it means to be human, leading to new modes of expression and cultural practices.

[…]

Technical innovations are never without political and social implications for Stiegler. The phonograph, for example, may have standardised classical musical performances after its invention in the late 1800s, but it also contributed to the development of jazz, a genre that was popular among musicians who were barred from accessing the elite world of classical music. Thanks to the gramophone, Black musicians such as the pianist and composer Duke Ellington were able to learn their instruments by ear, without first learning to read musical notation. The phonograph’s industrialisation of musical performance paradoxically led to the free-flowing improvisation of jazz performers.

Source: Aeon

Disinformation is free

An illustration of two large, stylized cats in the foreground overlooking a dense, futuristic cityscape bathed in shades of red and blue.

This is an interesting post by Ian Betteridge, mainly because of the point he makes at the end about disinformation leading to a retreat behind paywalls. I think it’s inevitable that any open/social space without a governance model that specifically focuses on high-quality moderation (rather than increasing ‘shareholder value’) will have problems.

The answer is retreating to people we know and trust, but this doesn’t have to be in dark forests. We can use decentralised moderation models such as those that Bluesky and others are developing. My main concern is that we reach peak disinformation during an important election year before these mitigating technologies come to fruition.

“Grey goo” was a concept which emerged when nanotechnology was the hot new thing. First put forward by Eric Drexler in his 1986 book The Engines of Creation, this is the idea that self-replicating nanobots could go out of control and consume all the resources on Earth, turning everything into a grey mass of nanomachines.

Few people worry about a nanotech apocalypse now, but arguably we should be worried about AI having a very similar effect on the internet.

[…]

It is obvious that anywhere content can be created will ultimately be flooded with AI-generated words and pictures. And the pace of this could accelerate over the coming years, as the tools to use LLMs programmatically become more complex.

[…]

This is the AI Grey Goo scenario: an internet choked with low-quality content, which never improves, where it is almost impossible to locate reliable public sources of information because the tools we have been able to rely on in the past – Google, social media – can never keep up with the scale of new content being created. Where the volume of content created overwhelms human or algorithmic abilities to sift through it quickly and find high-quality stuff.

[…]

With reliable information locked behind paywalls, anyone unwilling or unable to pay will be faced with picking through a rubbish heap of disinformation, scams, and low-quality nonsense.

In 2022, talking about the retreat behind paywalls, Jeff Jarvis asked “when disinformation is free, how can we restrict quality information to the privileged who choose to afford it?” If the AI-driven information grey goo scenario comes to pass, things would be much, much worse.

Source: Ian Betteridge

Slouchers rejoice!

Flowers with more or less drooping heads

My maternal grandmother was so paranoid about having a poor posture that she put a bamboo pole behind her back, and rested her wrists over the top of each side. The idea was to keep a straight back into old age. She did well to worry, as her sister, my Great Aunt, had osteoporosis. Of course, no amount of posture-correcting exercise is going to help you if your bones start crumbling.

This article in TIME is interesting in that it problematises the history of the posture-correcting ‘industry’ (for want of a better term). TL;DR: there’s no single, correct posture. Thank goodness for that.

By the mid 20th century, poor posture came to be seen as the culprit for rising rates of low back pain, even though little hard evidence existed to prove such claims of causality. President John F. Kennedy, who had repeated back surgeries and chronic pain, hired his own personal posture guru, Hans Kraus, a man who would go on to create one of the most well-known posture and fitness tests administered to hundreds of thousands of public school children throughout the Cold War. It was in this cultural and political context of containment that uprightness became a symbol of patriotism, heterosexual propriety, and individualist strength, all virtues believed to be needed in order to defeat the threat of communism.

[…]

On the face of it, posture improvement campaigns may seem rather innocuous. What is the harm, after all, of engaging in posture exercise programs? Of buying chairs, shoes, and devices that help to encourage it?

On an individual level, it is entirely possible that an enhanced sense of wellness can come from taking up yoga or purchasing an ergonomic chair. But when looking at the long history of posture improvement campaigns from an historical and structural standpoint, it becomes evident how value-laden they are, and how they can perpetuate sexism, ableism, and racism.

[…]

A recent study published by physical therapists working in Qatar, Australia, Ireland, and the United Kingdom speaks to the urgent need of the profession to dispel the medicalized myth that poor posture leads to bad health. “People come in different shapes and sizes,” they write, “with natural variation in spinal curvatures.”

In short, there is no single, correct posture. Nor does posture correction necessarily ensure future health. Maybe it’s ok to slouch from time to time, after all.

Source: TIME

'Neom' is the sound of contractors being laid off

Part of the proposed Neom development

About 15 years ago, local residents where we used to live were conned into backing (or at least not opposing) wind turbines being installed close to a residential area. The marketing materials included details of a proposed five-star hotel and golf course, which the developers said would help with tourism. Only the wind turbines were built, the developer “going bust” afterwards. I couldn’t stand the noise of the turbines, and it’s one of the reasons we moved.

It seems a similar kind of bait-and-switch is happening with the much-hyped Neom project in Saudi Arabia, a plan which I thought looked pretty dystopian in the launch video. I may be cynical, but perhaps they never intended to build it at all? Perhaps it was meant to deflect attention away from their petrochemical ambitions, sportswashing, and human rights abuses? It is a huge surprise to me that they built the luxury tourist destination part first. Huge.

Saudi Arabia has scaled back its medium-term ambitions for the desert development of Neom, the biggest project within Crown Prince Mohammed bin Salman’s plans for diversifying the oil-dependent economy, according to people familiar with the matter.

By 2030, the government at one point hoped to have 1.5 million residents living in The Line, a sprawling, futuristic city it plans to contain within a pair of mirror-clad skyscrapers. Now, officials expect the development will house fewer than 300,000 residents by that time, according to a person familiar with the matter.

Officials have long said The Line would be built in stages and they expect it to ultimately cover a 170-kilometer stretch of desert along the coast. With the latest pullback, though, officials expect to have just 2.4 kilometers of the project completed by 2030, the person familiar with the matter said, who asked not to be named discussing non-public information.

Source: Bloomberg

A tiny oasis of life, surrounded by an immensity of death

A family looking at an 'AI Nature Simulator' screen while the real environment outside the simulator is barren and desolate.

This cartoon is exactly the scenario I’m concerned about happening to the planet we call home. I’m going to juxtapose it with a quotation from William Shatner, the actor best known for his role in Star Trek and who finally got to go to space in 2021.

Last year, at the age of 90, I had a life-changing experience. I went to space, after decades of playing a science-fiction character who was exploring the universe and building connections with many diverse life forms and cultures. I thought I would experience a similar feeling: a feeling of deep connection with the immensity around us, a deep call for endless exploration. A call to indeed boldly go where no one had gone before.

I was absolutely wrong. As I explained in my latest book, what I felt was totally different. I knew that many before me had experienced a greater sense of care while contemplating our planet from above, because they were struck by the apparent fragility of this suspended blue marble. I felt that too. But the strongest feeling, dominating everything else by far, was the deepest grief that I had ever experienced.

While I was looking away from Earth, and turned towards the rest of the universe, I didn’t feel connection; I didn’t feel attraction. What I understood, in the clearest possible way, was that we were living on a tiny oasis of life, surrounded by an immensity of death. I didn’t see infinite possibilities of worlds to explore, of adventures to have, or living creatures to connect with. I saw the deepest darkness I could have ever imagined, contrasting starkly with the welcoming warmth of our nurturing home planet.

This was an immensely powerful awakening for me. It filled me with sadness. I realised that we had spent decades, if not centuries, being obsessed with looking away, with looking outside. I played my part in popularising the idea that space was the final frontier. But I had to get to space to understand that Earth is, and will remain, our only home. And that we have been ravaging it, relentlessly, making it uninhabitable.

Image: Tjeerd Royaards

Text: The Guardian

Itano Circus

Gif showing Itano Circus animation style

I can’t remember where I came across it, but I’ve bookmarked both a YouTube video and the Wikipedia article for the signature style of Japanese animator Ichirō Itano.

Not only is it awesome in its own right, I think I’m correct in saying that the person who mentioned it was using it as a metaphor for attacking something from multiple angles. Which is also fantastic.

Itano is best known among anime fans for a style of action scene that he developed, usually nicknamed “Itano Circus” (板野サーカス, Itano sākasu) or “Macross missile massacre” by fans; it refers to a highly stylized and acrobatic method of depicting aerial combat and dogfights in many anime, particularly the Macross series.

The battle scenes of conventional mecha animation took the style of a “duel” using guns and swords, as in Westerns and jidaigeki, and much of the staging emphasised the heaviness and posing (decision poses) of the robots. A good example of this is the sword fighting in battle scenes such as those in Gundam. He created new scenes with acrobatic moves.

Source: Ichirō Itano

Gif from video: “It’s The Circus”

Football fan hierarchy

Football (soccer) net

It remains a source of frustration to me that my kids support Liverpool. They’ve never been to a home game, and (I suspect) only chose them because they’re a good team in the Premier League, my wife being born there gave them an excuse, and my team (Sunderland) were doing terribly during their early years.

Of course, you can support whatever football team you like. But, as my mate Adam bangs on about all of the time, big money in football has corrupted the game.

[T]here are two concurrent developments taking place here. The first is the gradual realignment of fan hierarchy along the lines of one’s ability to pay: a development years in the making but now reaching a kind of tipping point amid rising prices and declining living standards. In his typically empathic and erudite way, Postecoglou was countering an argument that didn’t really exist. Nobody is discussing restricting access to foreign fans, who have always been able to rock up and buy a ticket. But in romanticising the devotion of the wealthy, amid price hikes that have enraged longstanding Spurs fans, Postecoglou offered up a justification that Daniel Levy and the corporate press office could scarcely have scripted more perfectly.

The other is the gradual erosion of the big club fanbase as a place of congregation and common ground. Broadening a fanbase also weakens it, weakens the ties that bind fans to each other, weakens their inclination to unite and organise. The mass of (mostly domestic-based) Manchester United fans resisting the sale of their club to a Qatari bidder were met by an equal and opposite wave of (mostly foreign-based) fans backing the Qatari bid. Meanwhile, how can we expect Chelsea fans to resist a future Super League if they can no longer even agree on whether Armando Broja is any good?

Source: The Guardian

Image: travis jones

De-bogging yourself

A wooden boardwalk meanders through a wetland with tall grasses, sporadic small trees, and patches of open water. The landscape is bathed in warm sunlight, suggesting early morning or late afternoon. In the background, a denser group of trees is visible against a partly cloudy sky.

I’ll not do this often, and I obviously encourage you to read the original article, but here’s a GPT-4 summary of a fantastic post by Adam Mastroianni from the start of the year.

His topic is getting yourself out of a situation where you’re stuck, which he calls “de-bogging yourself”. I love the way he breaks it down into three different kinds of ‘bog phenomena’ and gives names to examples which fall into those categories.

Insufficient Activation Energy: Describes a lack of motivation to change, encompassing scenarios such as taking on unwanted projects (gutterballing), waiting for a perfect solution (waiting for jackpot), avoiding necessary actions out of fear (declining the dragon), and remaining in mediocre situations due to a lack of motivation to change (the mediocrity trap). Additionally, it covers obsessing over problems without seeking solutions (stroking the problem).

Bad Escape Plans: Details flawed strategies for change, including the belief that mere effort without direction will lead to improvement (the “try harder” fallacy), unrealistic expectations of future effort (the infinite effort illusion), blaming external factors (blaming God), misunderstanding the nature of problems (diploma vs. toothbrushing problems), expecting personal transformation without basis (fantastical metamorphosis), and attempting to control others' actions (puppeteering).

A Bog of One’s Own: Explores self-imposed psychological barriers to progress, such as overvaluing insignificant details (obsessing over tiny predictors) and holding unrealistic views of personal and others' problems (personal problems growth ray). It also discusses the detrimental effects of constant worry over external issues (super surveillance) and refusal to accept simple solutions (hedgehogging), culminating in the belief that personal satisfaction is unattainable (impossible satisfaction).

Source: Experimental History

Image: Maksim Shutov

The New York Times is a gaming platform

Chart showing gaming increasingly more popular than news in NYT apps

Via Garbage Day, this chart shows that The New York Times is more of a gaming platform than a news platform, in terms of time spent by visitors to their apps.

Remember when they bought Wordle? That was right at the end of January 2022 and here we are a couple of years later with games being a major driver of eyeballs on news sites.

This is inevitable, I guess, given that the majority of people get their news via headlines on social media, and that news sites increasingly have paywalls or login-gates. Still interesting though.

I am very excited about this chart because, as I wrote last month, The New York Times is a tech platform now, but, specifically, they’re a gaming platform. Which I always suspected would be the Next Big Thing in digital media and I’ve been desperate for an example of how it would work.

You can track stages of internet development by the evolution of the web portal. And the biggest publishers tend to operate downstream and also mimic those portals. In the read-only age of AOL and Yahoo, you had static news sites. In the search and social age of Facebook and Google, you had aggregation and viral media. And the new age coming into focus right now is almost certainly led by interactive entertainment platforms. Entire ecosystems built around videos and games. And, like it or not, the next Pop Crave will be inside of Fortnite or, possibly, own their own version of it.

Source: Garbage Day

Borobudur

An engraving of Borobudur based on original drawings — Unknown (c. 19th century). Public Domain.

M.E. Rothwell’s Cosmographia is a frequent delight, and his latest missive really hits my sweet spot: an unknown history, a huge structure, and a bit of a mystery. I encourage you to go and read the whole thing, or at least just look at the wonderful illustrations.

Borobudur is the largest and most elaborate Buddhist temple in the entire world. Yet its origins remain a mystery.

The temple lies in central Java, Indonesia, surrounded by thick jungle and a ring of mountains. It’s suspected its construction, amid an area with no traces of any other ancient buildings, palaces, or cities, may have been begun by the Shailendra and/or Sanjaya dynasties of 8th century Java. It’s estimated one million stones, each weighing 100kg, were mined from a nearby riverbed in order to build the stupa, which contains 504 statues of the Buddha and almost 3000 carved stone reliefs.

No one quite knows why such a vast Buddhist edifice was built in a primarily Hindu area, some 5000km away from the centre of Buddhist thought. It’s not even known what precise function it served. The sophisticated civilisation that birthed it went into a sudden and mysterious decline within a century of the temple’s completion in the mid-9th century.

[…]

[T]he temple was ‘rediscovered’ by the man with the most British name of all time — Sir Thomas Stamford Bingley Raffles. As Governor General of the Dutch East Indies between 1811 and 1816, Raffles took great interest in the history of the island. After he heard about the existence of a huge monument deep in the jungle, he dispatched Dutch engineer Hermann Cornelius to investigate. It took his team two months to cut back the undergrowth to reveal the extent of the huge temple complex.

Source: Cosmographia

Boundaries

Swimming pool lanes and steps

I might pay for Noah Smith’s publication if it weren’t on Substack. While it’s a shame that I may never see the bit beyond the paywall of this article, there’s enough in the part I can read to be thought-provoking.

He riffs off a Twitter thread by Mark Allan Bovair who points to 2015 when lots of people started to be extremely online. This changed society greatly because we started understanding the world through a political lens, both online and offline. (Although there isn’t really an ‘offline’ any more with smartphones in our pockets and wearables on our wrists.)

We like to think that our worldviews are based on facts, but they’re much more likely to be based on emotion. Given the increasingly-short social media-fueled news cycles, our tendency to favour images and video over text, and our willingness to share things that fit with our existing worldview, I think we’re in a lot of trouble, actually.

(I’d also point out in passing that the moral panic around teenagers and smartphones whipped up by commenters such as Jonathan Haidt says more about parents than it does about their kids)

Those equating this to events or technology are missing the point. There was a shift around 2015 where the “online” world spilled over into the real world and the way we view/treat each other changed.

After 2015 things in everyday life started to go through the political lens. We started to bucket people and behaviors along the political spectrum, which was largely an online behavior pre-2015. We started judging everyone as left or right, or we walked on eggshells to avoid it.

Before that, you knew your neighbor was Republican or Democrat based on their lawn signs, but it had little bearing on your daily interactions or behaviors. And it only seemed to matter every four years for a few months. Now it’s constant and pervasive.

And pre-2015 we had phones and social media, but there was more of a boundary, and most people would “log off” most of the day. The dopamine addiction, heightened by the polarization, was much lower. Only fringe message board and twitter posters spent their days arguing online, now it’s everywhere, and there’s no real boundary.

Source: Noahpinion

Eudaimonic exercise

Woman lifting weights

Audrey Watters discusses “the yips,” a term for “a form of dissociative freeze” which is bound up with trauma and can lead to performance anxiety and failure.

I have nowhere near the level of trauma that Audrey has had in her life, but I certainly recognise the use of exercise and competition (with others / with self) as a way of not addressing certain things. For example, I’ve had some kind of virus over the last few days which has made me feel weak. I was desperate to get back to the gym.

Even without the personal grief, trauma, and baggage we carry around as we age, the pandemic means that we have a lot of collective issues to process. Exercise seems benign because it doesn’t seem destructive like, for example, drug use. But anything can be an addictive behaviour — and, as Aristotle pointed out, eudaimonia does not sit at an extreme.

I can troubleshoot what went wrong at the gym on Wednesday. I can troubleshoot why I’m having trouble getting back to the pace and distance I was running before my “accident.” I have a whole list of physiological reasons why the barbell’s not moving, why my legs aren’t moving. My age. My training. My knee. My glutes. My diet. My sleep.

But I’m starting to recognize – really recognize – some significant psychological reasons too. My trauma. My trauma. My trauma. Not just my fall, but all the trauma that I’ve experienced in the last few years, last few decades. I’ve funneled a lot of my hopes for “mental health” into the rhythms of exercise and movement, and it’s an incredibly fragile routine.

There are times when I know my body loves it. And there are times when my brain certainly does too. But there are other times, particularly when I get the yips (which, for the record don’t always look like failing at a deadlift; it can be something that happens all the time, like failing to lean forward as I run) that I’m starting to recognize now are bound up in fear and shame.

I don’t think any amount of “tracking” or “optimization” with gadgets is going to address this issue. For me or for others. Indeed, what if we’re just making things worse?

Source: Second Breakfast

Origami unicorn

Photo of origami unicorn by Jo Nakashima

Erin Kissane wrote a long essay about Threads and the Fediverse. It’s worth a read in its own right, but the thing that really stood out to me for some reason was a random-ish link to instructions for making an origami unicorn.

There is zero chance of me ever making this, but I’m passing it on in case you’re less bad at this kind of thing. For me, it’s not the folding that I find difficult, it’s the rotational 3D stuff. I even find it difficult putting the duvet cover on the right way round (much to my wife’s amusement/dismay).

This model was first designed in 2014, but this is an updated version with some “bug fixes” (legs are properly locked) and a color changed horn.

Source: Jo Nakashima

The best antidote for the tendency to caricature one’s opponent

Daniel Dennett sitting in the woods, cleaning his glasses

Daniel Dennett is a philosopher whom I enjoyed reading as an undergraduate studying towards a Philosophy degree. I don’t think I’ve read him since, although his book Intuition Pumps and Other Tools for Thinking is on my list of books I’d like to read.

Maria Popova has extracted four rules which Dennett cites in Intuition Pumps which originally come from game theorist Anatol Rapoport. Sounds like good advice to me, especially in this fractured, fragmented world.

How to compose a successful critical commentary:

  1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

  2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).

  3. You should mention anything you have learned from your target.

  4. Only then are you permitted to say so much as a word of rebuttal or criticism.

Source: The Marginalian

Image: Daniel Dennett (via The New Yorker)

Endlessly clever

Yellow side of a solved Rubik's cube on a yellow background

Ethan Marcotte takes a phrase used in passing by a friend and applies it to his own career. He makes a good point.

(I noticed that Marcotte’s logo resembles the Firefox imagery that was used while I was at Mozilla. I typed that organisation and his name into a search engine and serendipitously discovered With Great Tech Comes Great Responsibility, which I don’t think I’ve seen before?)

As tech workers, we’re expected to constantly adapt — to be, well, endlessly clever. We’re asked to learn the latest framework, the newest design tool, the latest research methodology. Our tools keep getting updated, processes become more complex, and the simple act of just doing work seems to get redefined overnight.

And crucially, we’re not the ones who get to redefine how we work. Most recently, our industry’s relentless investment in “artificial intelligence” means that every time a new Devin or Firefly or Sora announces itself, the rest of us have to ask how we’ll adapt this time.

Dunno. Maybe it’s time we step out of that negotiation cycle, and start deciding what we want our work to look like.

Source: Ethan Marcotte

Image: Daniele Franchi

When should you replace running shoes?

Photograph of back half of a running shoe showing midsole

John Sutton knows more about this area than I do. Not only is he an ultramarathon runner, but he also works in the area of ‘carbon literacy’ and sustainability. I’m also sure that he’s correct that the claims that you need to replace your running shoes after a certain number of miles are driven by marketing departments.

Still, I’ve definitely experienced creeping lower-back pain when getting to around 650 miles in a pair of running shoes. Of course, now I’m wondering whether it’s all psychosomatic…

With age and high mileage, it is said that the midsole no longer provides the cushioning that you need to prevent injury. This is cited as the main reason that shoes need replacing on a regular basis. Again, looking at the Lightboost midsole on these shoes, I see no evidence of crushing or squashing and I certainly don’t think I can feel any difference to the foot strike than when they were new. Obviously, any change in perceived cushioning is likely to be imperceptibly gradual and I could only really confirm that the cushioning was no longer up to snuff by comparing them directly with a new pair. These shoes are at a premium price (£170) and as such, I would expect them to be made of premium materials and built to last. My visual inspection of them suggests that they are still in excellent condition.

On the face of it, I see no obvious reason why I should retire these Ultraboost Lights any time soon. However, that seems to go against industry recommendations. What if invisible midsole damage has been so gradual that I haven’t noticed it? Now that I’ve reached 500 miles, am I likely to injure myself through continued usage? As a triathlete, I know from years of bitter experience that I am far more likely to injure myself on a run than I am cycling or swimming. So, anything I can do to improve my chances of not getting injured would be a powerful incentive to act. Thus, if it could be proven scientifically that buying a new pair of trainers every 300 – 500 miles would lessen my chances of injury, then I would take that evidence very seriously indeed.

[…]

In a previous blog post I discussed the carbon footprint of a pair of running shoes (usually between 8kg and 16kg of CO2 per pair). In the great scheme of things, this is not a huge figure (until you scale up to the billions of pairs of trainers sold each year and the realisation that virtually all of these are destined for landfill at end of life). My Ultraboosts have a significant content made from ocean plastic and recycled plastic which reduces their carbon footprint by 10% compared to the previous model made with non-recycled materials. 10% is better than nothing, and the use of some ocean plastic is much better than taking plastic bottles out of the recycling loop and spinning them into polyester. But, I can do a lot better than 10% by not swapping my shoes for a new pair until they are properly worn out. Simply by deciding to double the mileage and aiming for at least 1000 miles out of these shoes (hopefully more) I can at least halve the carbon footprint of my running shoe consumption.
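The saving Sutton describes is simple arithmetic: a pair’s manufacturing footprint is fixed, so running it for twice as many miles halves the footprint per mile. A minimal sketch of that calculation (the 8–16kg range is from the quote; taking the 12kg midpoint is my own assumption):

```python
# Back-of-the-envelope per-mile CO2 footprint of a pair of running shoes.
# The 8-16 kg CO2e per pair range is from the quoted post; the 12 kg
# midpoint used here is an assumption for illustration.
CO2_PER_PAIR_KG = 12.0

def per_mile_footprint_g(lifetime_miles: float) -> float:
    """Grams of CO2e attributable to each mile run in one pair of shoes."""
    return CO2_PER_PAIR_KG * 1000 / lifetime_miles

# Retiring shoes at the industry-recommended 500 miles vs running them to 1000:
print(per_mile_footprint_g(500))   # 24.0 g per mile
print(per_mile_footprint_g(1000))  # 12.0 g per mile
```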

Source: Irontwit

Human agency in a world of AI

An equilateral triangle with most of it shaded red except the top (pointy) bit which is shaded yellow. The red part is labelled 'The bit technology can do' and the yellow part is labelled 'The human bit'

Dave White, Head of Digital Education and Academic Practice at the University of the Arts in London, reflects on a recent conference he attended where the tone seemed to be somewhat ‘defensive’. Instead of cheerleading for tech, the opening video and keynote instead focused on human agency.

White notes that this may be heartening but it’s a narrative that’s overly simplistic. The creative process involves technology of all different types and descriptions. It’s not simply the case that humans “get inspired” and then use technology to achieve their ends.

The downside of these triangles is that they imply ‘development’ is a kind of ladder. You climb your way to the top where the best stuff happens. Anyone who has ever undertaken a creative process will know that it involves repeatedly moving up and down that ladder or rather, it involves iterating research, experimentation, analysis, reflection and creating (making). Every iteration is an authentic part of the process, every rung of the ladder is repeatedly required, so when I say technology allows us to spend more time at the ‘top’ of these diagrams, I’m not suggesting that we should try and avoid the rest.

I’d argue that attempting to erase the rest of the process with technology is missing the point(y). However, a positive reading would be that, as opposed to the zero-sum-gain notion, a well-informed incorporation of technology could make the pointy bit a bit bigger (or more pointy). The tech could support us to explore a constantly shifting and, I hope, expanding, notion of humanness. This idea is very much in tension with the Surveillance Capitalism, Silicon Valley, reading of our times. I’m not saying that the tech does support us to explore our humanity, I’m saying it could and what is involved in that ‘could’ is worth thinking about.

Source: David White

5 ways in which AI is discussed

An illustrated group of diverse people in a meeting room, with a large chalkboard in the background featuring an intricate drawing of a humanoid robot head filled with gears and symbols representing various aspects of technology and thought. The group appears engaged in a discussion about artificial intelligence.

Helen Beetham, whose work over at imperfect offerings I’ve mentioned many times here, has a guest post on the LSE Higher Education blog about AI in education.

She discusses five ways in which it’s often discussed: as a specific technology, as intelligence, as a collaborator, as a model of the world, and as the future of work. In my day-to-day routine, I tend to use it as a collaborator, because I have (what I hope to be) a reasonable mental model of the capacities and limitations of LLMs.

What’s particularly useful about this article is the meta-framing that more ‘productivity’ isn’t always to be valued. Sometimes, what we want is for people to slow down and deliberate a bit more.

AI narratives arrive in an academic setting where productivity is already overvalued. What other values besides productivity and speed can be put forward in teaching and learning, particularly in assessment? We don’t ask students to produce assignments so that there can be more content in the world, but so we (and they) have evidence that they are developing their own mental world, in the context of disciplinary questions and practices.

Source: LSE Higher Education blog

14 years of Tory (mis)rule

A cup of tea in a fancy teacup on a fancy plate

I don’t even have words for how bad the last 14 years have been under the Tories. Thankfully, people who do have the words have written some of them down.

This piece in The New Yorker is very long, but even just reading some of it will help those outside the UK understand what is going on, and those inside it hang their heads in shame.

Some people insisted that the past decade and a half of British politics resists satisfying explanation. The only way to think about it is as a psychodrama enacted, for the most part, by a small group of middle-aged men who went to élite private schools, studied at the University of Oxford, and have been climbing and chucking one another off the ladder of British public life—the cursus honorum, as Johnson once called it—ever since.

[…]

These have been years of loss and waste. The U.K. has yet to recover from the financial crisis that began in 2008. According to one estimate, the average worker is now fourteen thousand pounds worse off per year than if earnings had continued to rise at pre-crisis rates—it is the worst period for wage growth since the Napoleonic Wars. “Nobody who’s alive and working in the British economy today has ever seen anything like this,” Torsten Bell, the chief executive of the Resolution Foundation, which published the analysis, told the BBC last year. “This is what failure looks like.”

[…]

“Austerity” is now a contested term. Plenty of Conservatives question whether it really happened. So it is worth being clear: between 2010 and 2019, British public spending fell from about forty-one per cent of G.D.P. to thirty-five per cent. The Office of Budget Responsibility, the equivalent of the American Congressional Budget Office, describes what came to be known as Plan A as “one of the biggest deficit reduction programmes seen in any advanced economy since World War II.” Governments across Europe pursued fiscal consolidation, but the British version was distinct for its emphasis on shrinking the state rather than raising taxes.

Like the choice of the word itself, austerity was politically calculated. Huge areas of public spending—on the N.H.S. and education—were nominally maintained. Pensions and international aid became more generous, to show that British compassion was not dead. But protecting some parts of the state meant sacrificing the rest: the courts, the prisons, police budgets, wildlife departments, rural buses, care for the elderly, youth programs, road maintenance, public health, the diplomatic corps.

In the accident theory of Brexit, leaving the E.U. has turned out to be a puncture rather than a catastrophe: a falloff in trade; a return of forgotten bureaucracy with our near neighbors; an exodus of financial jobs from London; a misalignment in the world. “There is a sort of problem for the British state, including Labour as well as all these Tory governments since 2016, which is that they are having to live a lie,” as Osborne, who voted Remain, said. “It’s a bit like tractor-production figures in the Soviet Union. You have to sort of pretend that this thing is working, and everyone in the system knows it isn’t.”

Source: The New Yorker

Identifying things that don't work

Super Mario screenshot

I always find something I agree with in posts like this. Here are some of those things in a list of “things that don’t work”:

  1. Tearing your hair out because people don’t follow written instructions. You can fill your instructions with BOLD CAPS and rend your garments when this too fails. A more pleasant option is to craft supportive interfaces where people don’t need instructions. I’m convinced the best interface in history is the beginning of Super Mario Brothers. You just start.

[…]

  1. Doing unto others as you would have them do unto you. This is a beautiful idea, but often other people simply don’t have the same needs you do.

[…]

  1. Trying to figure it all out ahead of time. For hard problems, you can sit around trying to see around all corners and anticipate all possibilities. This can work—when Apollo 11 landed on the moon, everything worked the first time. But it’s really hard. If you can, it’s easier to build a prototype, learn from the flaws, and then build another one. (This, of course, contradicts the previous point.)

[…]

Things that work: Dogs, vegetables, index funds, jogging, sleep, lists, learning to cook, drinking less alcohol, surrounding yourself with people you trust and admire.

Source: Dynomight

The art of distraction

Depicts a contemporary individual in a minimalist room, gazing out at a vast sky transitioning from blue to light gray, symbolizing the move from distraction to introspection. Modern devices are present but unused, emphasizing a deliberate choice for solitude. The individual's contemplative yet uneasy demeanor reflects the struggle and importance of facing one's own thoughts.

L.M. Sacasas has written a lengthy commentary on an essay by Ted Gioia, which is well worth reading in its entirety. The main thrust of Gioia’s essay is that we have substituted ‘dopamine culture’ for the arts and creative pursuits. Sacasas believes that this is too simplistic a framing.

I’m quoting the part where he uses Pascal to show Gioia, and anyone else who holds a similar point of view, that human beings have always been thus. Except these days we live like the kings of old, with the means to be distracted easily and at will. I think, in general, we’re far too bothered about how other people act, and not bothered enough about how we ourselves act.

It might be helpful to back up a few hundred years and consider a different telling of our compulsive relationship to distraction, and from there to ask some better questions of our current situation. Writing in the mid-seventeenth century, the French polymath Blaise Pascal wrote a series of strikingly relevant observations about distraction, or, as the translations typically put it, diversions. Frankly, these centuries-old observations do more, as I see it, to illuminate the nature of the problem we face than an appeal to dopamine and they do so because they do not reduce human behavior to neuro-chemical process, however helpful that knowledge may sometimes be.

Pascal argued, for example, that human beings will naturally seek distractions rather than confront their own thoughts in moments of solitude and quiet because those thoughts will eventually lead them to consider unpleasant matters such as their own mortality, the vanity of their endeavors, and the general frailty of the human condition. Even a king, Pascal notes, pursues distractions despite having all the earthly pleasures and honors one could aspire to in this life. “The king is surrounded by persons whose only thought is to divert the king, and to prevent his thinking of self,” Pascal writes. “For he is unhappy, king though he be, if he think of himself.”

We are all of us kings now surrounded by devices whose only purpose is to prevent us from thinking about ourselves.

Pascal even struck a familiar note by commenting directly on the young who do not see the vanity of the world because their lives “are all noise, diversions, and thoughts for the future.” “But take away their devices and diversions,” Pascal observes, “and you will see them bored to extinction. Then they feel their nullity without recognizing it, for nothing could be more wretched than to be intolerably depressed as soon as one is reduced to introspection with no means of diversion.”

I don’t know, you tell me? I wouldn’t limit that description to the “young.” What do you feel when confronted with a sudden unexpected moment of silence and inactivity? Do you grow uneasy? Do you find it difficult to abide the stillness and quiet? Do your thoughts worry you? Solitude, as opposed to loneliness, can be understood as a practice or maybe even a skill. Have we been deskilled in the practice of solitude? Have we grown uncomfortable in our own company and has this amplified the preponderance of loneliness in contemporary society? Recall, for instance, how Hannah Arendt once distinguished solitude from loneliness: “I call this existential state [thinking as an internal conversation] in which I keep myself company ‘solitude’ to distinguish it from ‘loneliness,’ where I am also alone but now deserted not only by human company but also by the possible company of myself.”

It seems to me that these are all now familiar issues and tired questions. As observations about our situation, they now strike me as banal. We all know this, right? But perhaps for that reason we do well to recall them to mind from time to time. After all, Pascal would also tell us that the stakes are high, quite high. “The only thing which consoles us for our miseries is diversion, and yet this is the greatest of our miseries,” he writes. “For it is this which principally hinders us from reflecting upon ourselves, and which makes us insensibly ruin ourselves. Without this we should be in a state of weariness, and this weariness would spur us to seek a more solid means of escaping from it. But diversion amuses us, and leads us unconsciously to death.”

Source: The Convivial Society

The problem with private property societies

A panoramic view of a mountain range with peaks in Dark and Light Gray, topped with glowing crystals in Bright Red and Yellow. The scene features a Blue river reflecting the sky and crystals, blending natural majesty with fantasy elements.

I still subscribe to a few authors’ publications on Substack, although I wish they’d leave the platform. One of these is Antonia Malchik’s On The Commons, whose posts often include a turn of phrase which really resonates with me.

This week, I’ve been listening to Ep.24 of Hardcore History: Addendum where the host, Dan Carlin, interviews Rick Rubin, the legendary music producer. Towards the end, Rubin turns the tables and asks Carlin a few questions. One of them is about what life was like before land ownership. Carlin, who usually hugely impresses me, seemed to suggest that humans have always owned land in one way or another, and that collective ownership has only ever been an aberration.

I’m not sure that’s true. I think Carlin would do well to read, for example, David Graeber’s The Dawn of Everything: A New History of Humanity. Private ownership of everything is something that seems to be burned into the American psyche. But it doesn’t have to be this way. As Malchik points out in her post, private ownership within a capitalist economy is essentially why we can’t have nice things.

In their book The Prehistory of Private Property, authors Karl Widerquist and Grant S. McCall repeatedly go back to the main difference that they see in a private property society versus one where private ownership of, say, land, much less water and food, is unknown: freedom to leave. That is, if you want to walk away from your people, or your place, can you do so and still support yourself? Can you walk away and find or make food, shelter, and clothing? In non-private property societies, the freedom to walk away and still live just fine is the norm. In private property societies, it’s almost nonexistent. You have to work to make rent. Land-rent, you might call it. Someone else owns the land, and you have to pay to live on it.

The extent to which this reality runs counter to most of our existence, even if we’re just counting the few hundred thousand years that Homo sapiens have been here and not the millions of years of hominin evolution before that, is mind-bending. There have been territories and civilizations and controlling empires for thousands of years all over the world, but for most of our species’ existence, most humans had some kind of freedom to live on, with, and from land without needing to pay someone else for the privilege of existing. Until relatively recently.

We can’t all spend our time as we would wish not just because capitalism allows a few humans to hoard an increasing amount of money and power, but because the planet’s dominant societies force land to be privately owned, and make access to food and clean water something we have to pay for.

Source: On The Commons

More equal societies perform better

Scatter plot titled 'Unequal Outcomes' indicating that nations with larger gaps between rich and poor tend to have worse health, social, and environmental problems. Data points for various countries are plotted against income inequality (Gini coefficient) on the x-axis and an index of health, social and environmental problems on the y-axis. The UK is positioned in the upper middle, suggesting it has higher income inequality and more health, social, and environmental issues compared to countries like Belgium, Netherlands, and the Nordic countries, but less than the United States and Israel.

It’s easy to say that you hate the Tories. The reason, of course, is that while they’re in government they institute policies and pass laws that make the country more unequal. This is problematic for everyone, not just the impoverished.

This well-referenced article is published in Nature. I’d also check out this video I saw posted to LinkedIn (though originally from TikTok) arguing that more capitalism isn’t necessarily better.

Even affluent people would enjoy a better quality of life if they lived in a country with a more equal distribution of wealth, similar to a Scandinavian nation. They might see improvements in their mental health and have a reduced chance of becoming victims of violence; their children might do better at school and be less likely to take dangerous drugs.

[…]

Many commentators have drawn attention to the environmental need to limit economic growth and instead prioritize sustainability and well-being. Here we argue that tackling inequality is the foremost task of that transformation. Greater equality will reduce unhealthy and excess consumption, and will increase the solidarity and cohesion that are needed to make societies more adaptable in the face of climate and other emergencies.

[…]

Other studies have also shown that more-equal societies are more cohesive, with higher levels of trust and participation in local groups [16]. And, compared with less-equal rich countries, another 10–20% of the populations of more-equal countries think that environmental protection should be prioritized over economic growth. More-equal societies also perform better on the Global Peace Index (which ranks states on their levels of peacefulness), and provide more foreign aid. The UN target is for countries to spend 0.7% of their gross national income (GNI) on foreign aid; Sweden and Norway each give around 1% of their GNI, whereas the United Kingdom gives 0.5% and the United States only 0.2%.

Source: Nature

Toward the ad-free city?

An abstract, vibrant community scene in Sheffield, with figures of varying ages engaging in transforming a public space from advertisement-dominated to a green, communal area, symbolized by bright, playful shapes and colors.

I can’t stand adverts, whether they appear on the web (adblockers!), on TV (sound off!), or on billboards (ignore!). It feels like mind pollution to me.

I’m glad that Sheffield, a city I called home for three years while at university, has decided to do something about the most pernicious forms of advertising. It’s particularly interesting that they’ve done a cost/benefit analysis against the cost to “the NHS and other services”.

Adverts for a wide range of polluting products and brands, including airlines, airports, fossil fuel-powered cars (including hybrids) and fossil fuel companies, will not be permitted on council-owned advertising billboards under the new Sheffield City Council Advertising and Sponsorship Policy. The council’s social media, websites, publications and any sponsorship arrangements will also be subject to the restrictions.

[…]

This breaks new ground in the UK, with Sheffield going further than any other council to remove polluting promotions. Sheffield declared a climate emergency in 2019, alongside many other local councils. This step demonstrates a real commitment to reducing emissions, driving down air pollution, and encouraging a shift towards lower-carbon lifestyles.

[…]

By including specific criteria that prioritises small local businesses, the policy also aims to protect Sheffield’s local economy. After consultation with other councils and outdoor advertising companies, Sheffield’s Finance Committee concluded that the financial impact of the policy was likely to be low (approx. £14,000-£21,000) compared to the costs incurred via pressures on the NHS and other services.

Source: badvertising

The impact of the pandemic

A solitary child in a playroom, wearing bright red and blue, plays with colorful blocks, surrounded by fading grays, symbolizing isolation and developmental challenges during the pandemic.

This is a difficult read. Without even going into the breakdown in social relations and trust, it lays out the health and development impact of the pandemic for different age groups.

I can only thank my lucky stars that neither of our kids was born in 2020. It still had an effect on them, in different ways; thankfully, that doesn’t seem to have been in terms of health or development.

The article attempts to end on a positive note, which I’ve included here. But unless a newly-elected Labour government manages to turn things around completely from the direction we’re headed under the Tories, it’s difficult to see things getting much better soon.

Across all age groups, the pandemic appears to have chipped away at health and the NHS treatment that people receive.

The challenge of reversing these trends can appear overwhelming and insurmountable, but recognising the scale of a problem can also, in time, galvanise a proportionate response.

“There are parallels with the Industrial Revolution, which was really bad for health inequalities,” said Steves. “But that was followed by a period of philanthropy, government leadership and infrastructure changes. The pandemic does have a legacy that’s important for health. So we need to also think about how this could be a major opportunity.”

Source: The Guardian

Microcast #104 — Questioning uncritical acceptance

An abstract, colorful representation of a microcast featuring geometric shapes. The central abstract microphone is crafted from overlapping circles and lines in bright red, yellow, and blue, set against a light and dark gray gradient background. Surrounding the microphone, abstract sound waves are depicted as concentric circles and erratic lines, capturing the essence of broadcasting in a vibrant, non-literal form.

A microcast to respond to a thread on the Fediverse about uncritical acceptance of new technologies.

Show notes

Bluesky's approach to decentralised moderation

Bluesky account with option to follow for moderation; moderation toggle screenshot

Over the last 5-6 years I’ve had to think deeply about moderation in decentralised networks, first for MoodleNet and then for Bonfire. In that time, a new network has come along called Bluesky, seeded with money from Twitter (pre-Musk).

Bluesky (AT Protocol) and Fediverse apps such as Mastodon, Pixelfed, and Bonfire (ActivityPub) use different protocols. There’s no reason why they can’t be bridged, but attempts to do so have met with some hostility. Moderation in ActivityPub-compatible networks relies on the server/instance that you’re on. There are advantages to this, but I guess the downside is that if you like the people but not the moderation policy, you’ve got to decide to stick or twist.

What Bluesky is doing is something similar to something Bonfire has proposed: allowing people to follow accounts that focus on moderation. This means that you can decide to, for example, dial down the profanity, or mark things as spam based on a definition you share with someone else.
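The layering idea is easy to picture as composition: each labelling service is just a function from a post to a set of labels, and the client decides which labels to act on. The sketch below is only a toy illustration of that general idea; the function names and labels are invented, and this is not Bluesky’s actual Ozone/AT Protocol API.

```python
# Toy sketch of 'stackable' moderation: each labeler maps a post to a set of
# labels; the client hides posts carrying any label the user has chosen to hide.
# (Illustrative only; not Bluesky's actual Ozone/AT Protocol API.)

def profanity_labeler(post: str) -> set[str]:
    return {"profanity"} if "damn" in post.lower() else set()

def spam_labeler(post: str) -> set[str]:
    return {"spam"} if "buy now" in post.lower() else set()

def moderate(post: str, labelers, hidden_labels: set[str]) -> bool:
    """Return True if the post should be shown under the user's settings."""
    labels = set().union(*(labeler(post) for labeler in labelers))
    return not (labels & hidden_labels)

# A user subscribes to both labelers but only chooses to hide spam:
subscribed = [profanity_labeler, spam_labeler]
print(moderate("Buy now!!!", subscribed, {"spam"}))             # False (hidden)
print(moderate("Well damn, nice photo", subscribed, {"spam"}))  # True (shown)
```

The point of the design is that subscribing to a labeler and acting on its labels are separate choices, which is what lets two users share definitions while making different decisions about them.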

Today, we’re excited to announce that we’re open-sourcing Ozone, our collaborative moderation tool. With Ozone, individuals and teams can work together to review and label content across the network. Later this week, we’re opening up the ability for you to run your own independent moderation services, seamlessly integrated into the Bluesky app. This means that you’ll be able to create and subscribe to additional moderation services on top of what Bluesky requires, giving you unprecedented control over your social media experience.

At Bluesky, we’re investing in safety from two angles. First, we’ve built our own moderation team dedicated to providing around-the-clock coverage to uphold our community guidelines. Additionally, we recognize that there is no one-size-fits-all approach to moderation — no single company can get online safety right for every country, culture, and community in the world. So we’ve also been building something bigger — an ecosystem of moderation and open-source safety tools that gives communities power to create their own spaces, with their own norms and preferences. Still, using Bluesky feels familiar and intuitive. It’s a straightforward app on the surface, but under the hood, we have enabled real innovation and competition in social media by building a new kind of open network.

[…]

Bluesky’s vision for moderation is a stackable ecosystem of services. Starting this week, you’ll have the power to install filters from independent moderation services, layering them like building blocks on top of the Bluesky app’s foundation. This allows you to create a customized experience tailored to your preferences.

Source: Bluesky blog

Barnacle ball

A half-submerged football with a cluster of mussels attached below the waterline, floating in clear greenish waters.

Some great photos in this year’s British Wildlife Photography Awards. I was going to share the black and white one of the mountains, but this one, the overall winner, is incredibly powerful. It’s also just a fantastically-composed image.

An incredible image of a football covered in goose barnacles is the winner of this year’s British Wildlife Photography Awards.

The picture was chosen from more than 14,000 entries by both amateur and professional photographers.

The photograph, which also won the Coast and Marine category, was taken by Ryan Stalker.

“Above the water is just a football. But below the waterline is a colony of creatures. The football was washed up in Dorset after making a huge ocean journey across the Atlantic,” says Stalker.

“More rubbish in the sea could increase the risk of more creatures making it to our shores and becoming invasive species.”

Source: BBC News

Anti-AI hyperbole

A figure resembling Ed Zitron pops a large, shimmering 'AI Hype' bubble against a tech city skyline, with digital particles and a palette of light gray, dark gray, bright red, yellow, and blue.

This post has been going around my networks recently, so I’ve finally got around to giving it a read. The first thing worth pointing out is that the author, Ed Zitron, is CEO of a tech PR firm, so it’s no surprise that it’s written in a way designed to pop the AI hype bubble.

I’m not unsympathetic to Zitron’s position, but when he talks about not knowing anyone using ChatGPT, I don’t think he’s telling the truth. I’m using GPT-4 every day at this point, and now supplementing it with Perplexity.ai and Claude 3. A combination of the three can be really useful for everything from speeding up idea generation to converting a bullet point list to a mindmap.

One thing I’ve found AI assistants to be incredibly powerful for is spotting things I might have missed and providing a different perspective. Or even feeding in a list of things and generating recommendations based on it. You can do this for everything from music playlists to business competitors.

Every time Sam Altman speaks he almost immediately veers into the world of fan fiction, talking about both the general things that “AI” could do and non-specifically where ChatGPT might or might not fit into that without ever describing a real-world use case. And he’s done so in exactly the same way for years, failing to describe any industrial or societal need for artificial intelligence beyond a vague promise of automation and “models” that will be able to do stuff that humans can, even though OpenAI’s models continually prove themselves unable to match even the dumbest human beings alive.

Altman wants to talk about the big, sexy stories of Average General Intelligences that can take human jobs because the reality of OpenAI — and generative AI by extension — is far more boring, limited and expensive than he’d like you to know.

[…]

I believe a large part of the artificial intelligence boom is hot air, pumped through a combination of executive bullshitting and a compliant media that will gladly write stories imagining what AI can do rather than focus on what it’s actually doing. Notorious boss-advocate Chip Cutter of the Wall Street Journal wrote a piece last week about how AI is being integrated in the office, spending most of the article discussing how companies “might” use tech before digressing that every company he spoke to was using these tools experimentally and that they kept making mistakes.

[…]

Generative AI’s core problems — its hallucinations, its massive energy and unprofitable compute demands — are not close to being solved. Having now read and listened to a great deal of Murati and Altman’s interviews, I can find few cases where they’re even asked about these problems, let alone ones where they provide a cogent answer.

And I believe it’s because there isn’t one.

Source: Where’s Your Ed At?

Microcast #103 — Microphones and Moving to Micro.blog

An abstract, colorful representation of a microcast featuring geometric shapes. The central abstract microphone is crafted from overlapping circles and lines in bright red, yellow, and blue, set against a light and dark gray gradient background. Surrounding the microphone, abstract sound waves are depicted as concentric circles and erratic lines, capturing the essence of broadcasting in a vibrant, non-literal form.

The first microcast of 2024 and also the first on micro.blog. This one discusses the reasons for the move, and how it went.

Show notes

Taking seriously the noise and free-floating anxiety

Cards with images and words such as 'interoperability', 'diversity', and 'consent'

I’ve always enjoyed Helen Beetham’s writing, and her more recent work on AI has filled a gap for me after Audrey Watters shifted gears. With this post, I’m most interested in the ending, in which Helen reflects on how much time it takes to refute the bullshit.

She links to notmy.ai which outlines why AI is a feminist issue (clue: patriarchal by design, embedded racism, precarious labour).

[W]hen I started a blog about critical approaches to technology in education, I never imagined that generative AI would fill my own horizon. It has not been entirely fun. A colleague recently described it to me as ‘the constant intellectual labour involved in having to take seriously the noise and free-floating anxiety’, and that labour feels increasingly pointless. Talking ‘AI’ down is still talking about AI, it still adds to the vortex of attention. There are other many more important things in the world to be anxious about (though ‘AI’ seems set to make all of them worse).

AI will probably give paying users a new interface on their work and play that will be fun for a while, and then invisible - part of an ever-more-immersive life online. When ROI falters there will be another story (or a newer, better, ‘smarter’ version of the AI story) to sell hyper-productivity and automation to businesses, and to keep driving capital towards the biggest platforms. I just keep thinking that the idea all this has something to do with knowledge or learning is so obviously detrimental to education, and so obviously stupid and wrong, that education will find a way of talking back. Or - because alternative stories are available - will tell these stories confidently, so I can think about something else.

Source: imperfect offerings

Image: The Oracle for Transfeminist Technologies

Austerity is not efficiency

A surreal illustration depicting the UK shaped as a tangled web, representing the failing council services with frayed and broken strands, set against stormy clouds.

The UK is currently limping towards a General Election which, although it hasn’t been called yet, will probably be in October 2024. The past 14 years have absolutely decimated my country, with cuts to public services, a massive loss of trust in public officials, and spiraling inequality. The costs of Brexit cannot even be put into meaningful terms.

This article in The Guardian talks about the impact of Tory cuts to English councils, meaning that they have had to cut services. Some councils, either through bad luck, incompetence, or both, are in a really bad way. I wonder how much this will affect internal migration in the UK, as up here in Northumberland the NHS performs well, our bins are collected regularly, and the quality of life (I would say) is better.

Most of us now know the basics. In 2023, Birmingham city council – which is controlled by Labour, and is reckoned to be Europe’s largest local authority – effectively went bankrupt. There were three key reasons: massive cuts in funding from Whitehall, the cost of the belated resolution of the council’s gender pay gap, and the mind-boggling mishandling of a new IT system. In the midst of the rising need for council services – much of which was rooted in all the dislocation and disaster of the Covid crisis – all this spelled disaster. Now, many of the city’s services must be either hacked down or done away with, in pursuit of savings of about £300m over two years. As far as anyone understands it, this is the deepest programme of local cuts ever put through by a UK council.

[…]

Birmingham may be an outlier, but comparable stories are playing out all over England: in Nottingham, Somerset, Hampshire, Leicester, Bradford, Southampton and more. The House of Commons levelling up, housing and communities select committee puts English councils’ current financial gap at about £4bn a year, which could have been filled more than twice over by the money Jeremy Hunt used for that almost meaningless cut in national insurance. He seems to still think that councils must sink or swim: even more depressingly, he and his allies in the rightwing press have reprised old and stupid rhetoric about millions supposedly being wasted on “consultants” and “diversity schemes”.

[…]

Continuing austerity does not just kill people’s services; it has long since warped most political debates about what we should expect from the state. In lots of places, squalor, mess and festering social problems are now seen as the norm. So too is a scepticism about people’s need for help, which is endlessly encouraged by politicians and people in the media. That absurd opportunist Lee Anderson made his name by claiming that food banks were “abused” by people who didn’t need them. Now, the Times columnist Matthew Parris claims to “not believe in ADHD at all” and says that autism is “a much abused diagnosis”, while other voices insist that parents whose disabled children get some dependable help from their local councils are the possessors of “a golden ticket”. In both cases, the insidious process is much the same. First, services fail. Then, casting doubt on the resulting pain and letting the people responsible off the hook, there are loud suggestions that levels of need may not have been that great in the first place. As a result, austerity can be recast as efficiency, a move that always appeals to politicians, of whatever party.

Source: The Guardian

Reframing as small i's

The image depicts a small letter 'i' in an imaginative, fancy font, designed in black against a white background.

This is such an important reframing. I, for one, definitely have a tendency to let my latest success, failure, or injury define me. It ends up being a bit of a rollercoaster.

“I am such a loser, I can’t even do >insert attempt here< “. This creates an overblown sense of guilt (and self-pity), robbing us of any empowerment that might be had and tends to leave us moping in a corner somewhere. The ‘Big I, Small I’ visual gives us a perspective check. The Big I represents you as the shiny complex structure that is you altogether, while the Small I’s each represent one aspect of you. Every small I (for example: you trying to reach a deadline with good quality content) is one of many small I’s. It is part of you and therefore not to be trifled with but it does not define you. You are not this one thing that you are trying to achieve, you are many.

Source: Priscilla Haring

Image: DALL-E 3

Scaling AI requires 'muddling through'

The image depicts a surreal landscape where whimsical figures attempt to stack bricks to build a skyscraper. The structure defies the laws of physics, twisting and turning in dream-like ways. It's illuminated in an abstract, surreal light, with a color palette of Light Gray, Dark Gray, Bright Red, Yellow, and Blue. Some bricks appear almost melting, falling off as the skyscraper bends impossibly, showcasing the chaotic and doomed attempt.

The always thought-provoking Venkatesh Rao poses the question of what kind of scaling we need for AI. His analogy with building skyscrapers out of bricks-and-mortar is an interesting one. It’s a long read, but worth it.

The part which really resonated with me is when Rao starts talking about governance for AI agents, which needs to be in the form of liberal democracy rather than autocracy. “Regulating [AIs] will look like economic regulation, not technology regulation,” he says. This is why you need people who can think philosophically about technology and the future of humanity.

Fascinating.

To keep AI evolving, we need the various heterodoxies to cohere into one or more alternative positive visions of how to build the technology itself, not just creative reframes that make for stimulating cocktail party conversations. Into one or more new idea of what sort of AI we should attempt to built, in an engineering rather than ethical sense of should. As in well-posed, architecturally sound, and conceptually elegant enough to handle whatever we choose to throw at it.

I’m asking the question in the same sense as one might ask, how should we attempt to build 2,500 foot skyscrapers? With brick and mortar or reinforced concrete? The answer is clearly reinforced concrete. Brick and mortar construction simply does not scale to those heights. Culture wars in architecture and urbanism around whether or not skyscrapers are a good idea for society are moot unless you have good options for actually building them.

[…]

The current idea seems to be: If we build AI datacenters that are 10x or 100x the scale of todays (as Sam Altman appears to want to), and train GPT-style models on them that are also correspondingly scaled up, we’ll get to the most interesting sorts of AI. This is like trying to build the Burj Khalifa out of brick-and-mortar. It’s a fundamentally unsound idea. Problems of data movement and memory management at scale that are already cripplingly hard will become insurmountable. Just as the very idea of a 2,500 foot high brick structure is unsound because bricks don’t have the right structural properties, the current “bricks” of modern AI (to a first approximation, the “naked” Large X Models thinly wrapped in application logic) are the wrong ones.

[…]

[Going] back to the analogy to reinforced concrete. [The AIs Rao is arguing for] are fundamentally built out of composite materials that combine the constituent simple materials in very deliberate ways to achieve particular properties. Reinforced concrete achieves this by combining rebar and cement in particular geometries. The result is a flexible language of differentiated forms (not just cuboidal beams) with a defined grammar.

[They] will achieve this by combining embodiment, boundary management, temporality, and personhood elements in very deliberate ways, to create a similar language of differentiated forms that interact with a defined grammar.

Source: Ribbonfarm

Born to run

Camille Herron surrounded by bubbles and a cameraman pointing a camera at her

I set myself the target of running 1,000km this year. Camille Herron just ran 900km in six days 🤯

When the FURTHER event started last Wednesday, Herron was already the holder of multiple world records from 50 to 250 miles. A small crowd gathered under four towers of stage lights and rows of orange and white tents. The 42-year-old was in shades, a water bottle stuffed in the crotch of her shorts. On day one, she chugged a Coke float and ran 133 miles. Day two, she downed tacos and added another 113 miles. On 8 March International Women’s Day, she broke the American 48-hour road record for women. More would follow.

Each time Herron broke a record, she held her arms out wide, her hands pointing to the sky as if to say, “isn’t this incredible?” The fact that she is openly awed by what she does has at times made her a target in the ultrarunning community. Her whimsical pre-race mantra of “letting the magic come out” only adds to that. But it’s hard to argue with the numbers. And the numbers and records were piling up: a new 300km mark, the American 48-hour road record, a new 300-mile road record, the women’s 500km world record, the women’s 500 mile. When she completed the latter, she danced around the start line in pink compression socks, celebrating with high fives and hugs.

[…]

But Herron isn’t done. As the sun rises over the Santa Rosa Mountains, she lifts herself to her feet once more. One more push. One more loop, then two, three. She reaches 900km, another record, then it’s over. In her wake are 11 world records recognized by GOMU and a world best performance by the IAU. Either way, the numbers on the LED screen read a clear 560.3 miles. Above is one word in all caps: FURTHER.

Source: The Guardian

Consciousness porn

An ultra-high-resolution image depicting a nighttime scene on a deserted urban street leading to a railway crossing. The street is wet, with light reflections and white road markings. A building on the left side has a light gray facade with bright red graffiti, a red-lit window, and is lined with pipes and wires. A brown metal fence with barbed wire and a leaning pole with a yellow stripe is on the right, separating the sidewalk from the railway tracks. The background showcases illuminated street lamps, greenery, and building silhouettes, evoking a quiet, mysterious urban atmosphere.

Sometimes, I come across a post which comes from leftfield and is almost impossible to quote in a meaningful way. This one revolves around three things I’ve never even heard of, let alone experienced. The author puts them under the heading ‘consciousness porn’, with the three examples being quite diverse.

What I find so fascinating is that there are layers upon layers to this. For example, one of the commenters points out that the guy shooting the long videos of walks around Tokyo posted to his YouTube channel that he was depressed, wasn’t going to do any more after uploading the ones he’d already recorded, and didn’t know why anyone watched them in the first place.

It took me a while to comprehend why my son would watch other people play videogames. After a while I began to understand that there was an element of learning how to improve his own gameplay, but there was also an aesthetic to it. This consciousness porn seems to be almost pure aesthetic. I guess the next stage is endless versions of this created using AI.

Rambalac does everything he can to avoid intruding on the world he is observing, to be, like the character in Christopher Isherwood’s novel Goodbye to Berlin, “a camera with its shutter open, quite passive, recording, not thinking.” But occasionally we catch a glimpse of his reflection in a store window or elevator mirror – oh, he’s not Japanese! And in every frame we feel his presence – quiet, sweet, and a little sad, stopping to watch a black cat thread its way across a cluttered stoop, showing us the label of the green tea he’s bought from a vending machine, looking away politely from a fellow pedestrian, or standing still, on a rainy night, before the red gate that marks the entrance to a Shinto shrine, entranced.

[…]

I often listen to dub techno while watching Rambalac videos, which amplifies their chill, phenomenological trippiness, and makes me feel like I’m experiencing a mutant artform invented by William Gibson. An artform I call consciousness porn.

Source: Donkeyspace

Image: DALL-E 3

Vendor lock-in writ large

An imposing castle represents a tech giant's platform in a landscape, with chaos outside its walls as people are turned away, symbolizing rejected immigrants. Inside, occupants tear down workers' rights banners, while a monstrous AI figure made of gears and wires looms above, casting a shadow over workers who train it before being dismissed. In the background, a graveyard of cars and bank vaults symbolizes the failure of technological promises, illustrating the consequences of vendor lock-in and the illusion of progress.

People call me prolific, but I’m nothing compared to Cory Doctorow. I can’t keep up with his mostly-daily newsletters, never mind his longer-form stuff.

In this piece, he talks about one of his favourite topics: vendor lock-in. However, the genius lies in the way that he explains, in a way that sounds so obvious that it feels like scales falling from your eyes, why people blame immigrants for the lack of jobs. The real, historical reason for the decline in good jobs is because employers (with government help) smashed the unions.

Moving on to AI, he points out the “monstrous proposition” of AI companies who suggest that their clients train models based on workers, then fire the workers, replacing them with the AI products. The latter are nowhere near good enough to actually do the workers' jobs, but all the AI companies need to do is sell the proposition.

That’s why there are no jobs around at the moment: an illusion based on VC money. Remember how Uber was going to mean self-driving cars and the end of public transport? Remember how cryptocurrencies were going to mean the end of banks? Here we go again.

Bruce Schneier coined the term “feudal security” to describe Big Tech’s offer: “move into my fortress – lock yourself into my technology – and I will keep you safe from all the marauders roaming the land”

It’s a tried-and-true bullying tactic: convince your victim that only you can keep them safe so they surrender their agency to you, so the victim comes under your power and can’t escape your cruelty and exploitation. The focus on external threats is key: so long as the victim is more afraid of the dangers beyond the bully’s cage than they are of the bully, they can be lured deeper and deeper until the cage-door slams shut.

But here’s the thing about trusting a warlord when he tells you that the fortress’s walls are there to keep the bad guys out: those walls also keep you in. Sure, Apple will use its control over Ios to stop Facebook from spying on you, but when Apple spies on you, no one can help you, because Apple exercises total control over all Ios programs, including any that would stop Apple from nonconsensually harvesting your data and selling access to it:

Source: Pluralistic

Be careful what you wish for

I’ve already posted a thread about this on the Fediverse, so I’ll just copy-and-paste then tweak from that rant. TL;DR: Adobe have published a (commissioned) report about digital credentials, everyone’s over-excited, and I want to sound a note of caution.

A young woman sits in the foreground focused on her laptop, which is the source of a swirling, colorful vortex of digital shapes against a greyscale backdrop of a contemporary cityscape, highlighting the intersection of technology and modern life.

About 15 years ago, it was clear that Higher Education was about to become significantly ‘unbundled’ in western countries. The trend had started even before the start of my career, but accelerated around that time. We had things like Pearson being given degree-awarding powers, Massive Open Online Courses (MOOCs) allowing anyone to join university-provided courses, and the first blushes of digital credentials.

As thinkers such as Audrey Watters pointed out, unbundling is all well and good, but you better be damned careful about who’s doing the ‘rebundling’ and for what purpose. So, of course, the MOOCs turned into non-profit and for-profit providers that met with varying degrees of success (edX, Udacity, FutureLearn, etc.) These all needed ways to ‘certify’ their courses. Some partnered with universities, others went it alone with their own credentialing.

The digital credentials space has always been a difficult one to keep track of. That’s because it’s decentralised by design, just like the Fediverse, and… email. So while there are absolutely standards that make the whole thing work (Open Badges, etc.) it’s always been difficult to talk about numbers and how people are using digital credentials. In true “the future is here, it’s just unevenly distributed” style, some sectors have seen explosive adoption of digital credentials.

IBM, for example, have issued millions of digital credentials for things that you wouldn’t necessarily go to university to learn. It’s a big deal, and leads to decently-paying (and often high-paying) jobs. That’s great, and I point to this a lot. But it’s not like IBM did it out of the goodness of their hearts. They’re looking to remove the degree requirement for their jobs, which of course has a long-term depressing effect on wages.

Coming back to Adobe, while it’s great that they’ve suddenly discovered digital credentials and have commissioned A Report To Tell Us How Great They Are, we’d be naive to think that this is a benevolent act. What they’re doing, it seems, is positioning the ‘Adobe Certified Professional’ digital credential as the one that you need in that particular industry. That means tying ‘creativity’ to using certain tools, and having a very privatised ‘rebundling’ of knowledge and skills.

So, be careful what you wish for, I guess. Could a lot of this have been foreseen over a decade ago? Absolutely. But the problem, as many on the Fediverse will recognise, is that there’s a vested interest in not recognising the diversity of human experience. Digital credentials could and should be used to recognise lifelong and lifewide learning. They can be used to showcase the breadth of our experience in a holistic way.

That’s not what brands are interested in, though. Brands are interested in capturing and enclosing you as data points to be packaged up and sold alongside their proprietary products.

I’m sure there are plenty of people in my network (especially on LinkedIn) who will see this as an over-reaction. “But Doug, isn’t bringing more attention to the space worthwhile?” Not if the lens that is used to understand the space is reductionist and perpetuates some of the very problems we’re trying to solve.

Gone are the days of a college degree being the only key to unlock meaningful careers. Employers today need job candidates and employees with new, in-demand skills, and they expect to see them demonstrated in a variety of ways beyond a college transcript. With the rise of remote work, digital transformation, and AI, today’s most in-demand skills — creative problem-solving, visual communication, and digital fluency — are especially hard for hiring managers to identify in job application materials.

To shed light on this evolving landscape, Adobe has just released a research white paper, “The Creative Edge: How Digital Credentials Unlock Emerging Skills in the Age of AI.” Conducted by Edelman, the results of this commissioned global research study outline the role digital credentials play in helping career seekers get hired by showcasing their digital and creative skills.

Source: How digital credentials unlock emerging skills in the age of AI

Image: DALL-E 3

Scintillating scotomas

Manuscript illumination by Hildegard of Bingen resembling a visual disturbance similar to that experienced during a migraine

It’s weird to think that I was about my son’s age (17) when I started getting migraines. It wasn’t a massive surprise: there were migraineurs on both sides of my family, including my mother and my paternal grandmother. I may have literally dodged a bullet: being susceptible to migraines disqualifies you from pretty much every role in the Royal Air Force, to which I was in the process of applying.

These days, partly through stress management, ensuring I get good sleep, avoiding dehydration, and taking some supplements I’ve found helpful, my migraines are both less frequent and less extreme. They’re still part of who I am, though, and I know to get off screens immediately and take some of my meds if my vision starts getting distorted.

How to describe a scintillating scotoma? It’s one of the most common symptoms of a migraine, but unless you’ve had one, it sounds unreal. A scintillating scotoma is like a barbed ripple in the pool of sight. It’s a skeletal Magic Eye raised up from the flatness of the world. It’s a glare on the tarmac as you drive West at sunset on a rain-slick freeway—only when you turn your head, it’s still there, so you have to pull over, close your eyes, and wait out the slow-motion firework working its way across your brain.

[…]

In the absence of an organizing mind, everything comes unglued. Faces go missing and dark holes seem to eat half the universe. Migraine sufferers can experience the uncanny sense of consciousness doubling known as déja-vu, or its cousin, jamais-vu, in which the world feels newly-made. The world might feel suddenly very unreal, fracture into a mosaic, or slow to a stop-motion pace, dropping frames. The self might cleave in two in a fit of somatopsychic duality. Writing about these bizarre and horrifying perceptual phenomena, the late Oliver Sacks observed that migraines “show us how the brain-mind constructs ‘space’ and ‘time,’ by demonstrating what happens when space and time are broken, or unmade.”

[…]

According to Migraine Art: The Migraine Experience From Within, migraine auras are as old as humankind—so old, perhaps, that they may have inspired the geometric forms of Stone Age cave drawings. Which makes recent attempts to generate migraine auras using convolutional neural networks seem particularly poignant to me: what began in stone, animated by the hot flicker of firelight, continues 5,000 years later, deep in the heart of servers whose mineral components were mined from the same dark Earth.

Source: Wild Information

Image: Manuscript illumination by Hildegard of Bingen (who was a migraineur), 1151

Career vs Job

A solitary figure reflects on the edge of a cliff at sunrise, representing the importance of contemplation in distinguishing between a job and a career. The serene landscape and color scheme emphasize reflection and the broader view of one's professional life.

This post by Tim Klapdor is definitely related to the Aeon article I quoted about carving out time for reflection.

People are surprised when I say that I do about 20-25 hours of paid work per week. Somehow that’s ‘part time’. But I live a full life: studying, writing, taking my kids here, there, and everywhere. The only thing missing? I’d like to travel more, professionally.

A career contains a multitude of jobs. Some of them are the ones you get paid for, but many of them aren’t. And that’s often where the confusion comes into play. The paid job begins to bleed into other areas, and you associate the paid job with all the other jobs. They get lumped together as a career, but they are distinct and need to be kept separate. It’s our mind that blends them together, so every so often, we need to pull focus, reevaluate and paint in the edges to make it clear what our jobs really are.

[…]

In reflection, I can say that for the last few years, I’ve paid too much attention to my paid job and not my career. I’ve allowed the job to expand beyond its parameters and edges to consume everything around it—my time, attention, and priorities. What I need to do, and what I plan to do in 2024, is to switch that.

I want to focus on my career, not my job.

Source: Tim Klapdor

Absence is not a (defect)ion

This image is a digital collage that layers photographic textures with digital painting. A monochrome urban landscape in dark gray symbolizes the conventional work environment, while vibrant pockets of red, yellow, and blue form miniature worlds floating above the city. These bubbles represent “temporary autonomous zones” where individuals can engage in purposeless action and creativity, highlighting the contrast between the daily grind and the personal sanctuaries we create for ourselves.

I hadn’t thought of the early days of the pandemic as being akin to a general labour strike. Interesting. I could quote the entirety of this article, but I’ll just mention one thing that I haven’t included below: “It is because of its emptiness that the room is useful.” (Lao Tzu). The author of this article, David J Siegel, uses this to make the point that I’ve used as the title for this post; that absence is not defection.

The early period of the pandemic (which approximated in many respects a kind of general labour strike) gave some of us an intimation of what life lived largely off the clock can be like when much of what passes for work is suspended or slowed and we are afforded precious ‘little gaps of solitude and silence’, as the French philosopher Gilles Deleuze called them, to engage in worthy pursuits that elude us under normal circumstances. We found incomparable personal freedoms and new opportunities for enrichment and fulfilment in the cessation of many of our standard operating procedures.

Then, as everyone recalls, we were summoned back to the office. But, once we had experienced this new way of being, the prospect of returning to the old order – submitting to the control, policing and surveillance of our former workaday lives – became almost unthinkable, especially for members of a chronically insecure workforce forced to endure low pay, lack of opportunity for advancement, inflexible schedules, and a multitude of everyday insults and indignities. Perhaps the chief insult to us all is the governing assumption that we must be collocated – or collated – to do our best work, despite having demonstrated our capacity for self-directed productivity from home (or other private quarters) under the most trying circumstances.

[…]

In The Scent of Time: A Philosophical Essay on the Art of Lingering (2009), Byung-Chul Han suggests that our experience of intervals is being ‘destroyed in order to produce total proximity and simultaneity’. When everything (and everyone) is within reach at all times, we lose a sense of what it means to be in – and even to savour – transitional states of in-betweenness. As an antidote, [some authors] recommend that we ‘tarry with time’ and ‘make spaces for the play of purposeless action’.

We can, in other words, reappropriate some of the time and space being withdrawn from us. These can be reclaimed in the fugitive moments we thieve from the calendar, or they can be recovered in what the anarchist Hakim Bey in 1985 called ‘temporary autonomous zones’: undetectable underground enclaves that we carve out of the landscape of our everyday lives in order to find or free ourselves. Simultaneously, practices of disengagement might withdraw from organisations (workplaces primary among them) their extraordinary power to mediate – to dictate and direct – far too many aspects of our existence and experience. Opting to bypass certain workplace amenities and conveniences expertly designed to keep us at work – the cafeteria, the fitness centre, the dry cleaner, the onsite health clinic – might not seem like much of a tactic of rebellion, but it does its part to lessen our dependence on our employer as lifehack, helpmate or healer.

[…]

Withdrawal has an almost universally negative connotation in public life, where it is treated as the ultimate transgression and disdained as retreat or defeat – the very opposite of engagement. However, to withdraw is also, crucially, to repair – both to go to a place and to mend. From this perspective, withdrawal is not merely a defeatist tack; rather, it is, or can be, direct action for a restoration of intellectual life – the kind that is free to ask (to fully engage with) impertinent questions – in settings that have practically banished it, made it inaccessible, or are attempting to monitor and monetise it according to terms not of our choosing.

[…]

Among the questions some of us are investigating in our contemplative moments of disengagement, withdrawal, removal, retreat or escape – however we choose to designate those instances when we take our leave – are these: when, or to what extent, do our norms of organisational affiliation and attachment make us sick or otherwise compound the very problems such forms of connection are meant to solve? In what ways might our occasional absences improve our solitary and even our solidary experiences of work and of life more generally?

Source: Aeon

Claude's Prompt Library

Screenshot of Anthropic's Prompt Library

Anthropic, the organisation set up by ex-OpenAI staffers, has recently released Claude 3. This is apparently even more powerful than GPT-4, although I haven’t had a chance to play with it yet.

Alongside the release, Anthropic has also shared a Prompt Library which, I guess, is the equivalent of OpenAI’s GPTs.

Source: Anthropic Prompt Library

Sports betting and neoliberal atomisation

This image portrays a lone figure illuminated by the light of a smartphone screen, surrounded by darkness. It captures the solitary nature of sports betting through apps, contrasting the vibrant world of sports with the individual's isolation.

The only times I’ve ever bet on sports were with my father. Back when I lived at home, we’d all choose a horse in the Grand National (out of a hat) and I’d go down with him to the bookies to put the bets on. And then, when we went to a football match at Sunderland, we’d decide what bet to put on, too.

I’ve never bet on sports by myself. It’s a slippery slope, as I know what I’m like. When I was my son’s age (17) I was mildly addicted to scratchcards for a few weeks, but quit when I won enough to break even. That’s why the whole world of sports betting, which I know must be huge given that almost every Premier League football team is sponsored by a related company, is a black box to me.

Drew Austin talks about sports betting not only being the further atomisation of an activity which was at least nominally social, but also the way that it reduces a complex bundle of qualitative emotions down to a set of flat, quantitative, numbers.

As the Facebook/Google/Twitter clearnet dissolves and the internet becomes a dark forest, another relatively recent tech category offers a lens for anticipating the future of shared experience and solipsism: sports betting apps. Although largely unleashed by regulatory changes rather than technical innovation, the rise of mainstream, app-enabled sports gambling has reframed a still-powerful bulwark of mass culture as a solitary pursuit. As televised sports continue fragmenting into digital content just like everything else, sports betting creates a derivative market on top of that content, which in turn yields its own additional bounty of content. If you’ve ever bet on a game and then watched it with other people, you probably realized quickly that nobody cares about your betting angle(s) and that you have to shut up about it. You’re on your own. But if you show up at the Super Bowl party wearing a Kansas City Chiefs jersey, you are a legible entity, and everyone has something to talk to you about. To bet on sports is to share the same space (literal or figurative) with a multitude of people who have their own specific angle and only the meta-game in common. Sports gambling is even more fascinating, however, in the way it alters your brain as a spectator of the game: You exchange a complex bundle of emotional and aesthetic nuance for a purely quantitative perspective, which highlights everything that benefits you and pushes the rest to the background. It’s how it would feel to be a computer watching sports. A lot of things we do on the internet feel like that. Who needs NPCs to interact with when we all act like them anyway? We pay so much attention to how computers are learning to be human, but forget we’re also learning to act like them.

Source: Kneeling Bus

Image: DALL-E 3

Subject, Consumer, Citizen

Andrew Curry reflects on the work of Jon Alexander, author of a book called Citizens (2022). Alexander has been on a bit of a journey talking to people, and has made some discoveries. The image features three columns, each representing different societal roles: SUBJECT, CONSUMER, and CITIZEN, each with its own background color—orange, pink, and blue, respectively. For the SUBJECT, words such as DEPENDENT, RELIGIOUS, DUTY, OBEY, RECEIVE, COMMAND, PRINT, HIERARCHY, and SUBJECTIVE are listed, set against a light orange striped background. The CONSUMER column has words like INDEPENDENT, MATERIAL, RIGHTS, DEMAND, CHOOSE, SERVE, ANALOGUE, BUREAUCRACY, and OBJECTIVE, all on a pink striped background. Lastly, the CITIZEN column lists INTERDEPENDENT, SPIRITUAL, PURPOSE, PARTICIPATE, CREATE, FACILITATE, DIGITAL, NETWORK, and DELIBERATIVE, against a light blue striped background. The text is arranged vertically in a sans-serif font, and each word is placed in a horizontal alignment with its counterparts in the other columns.

I’m mainly sharing this for the diagram, which Stowe Boyd also picked up on, providing better commentary than I ever could. All I’ll say is that it’s good to see things laid out so clearly, although I would have put the ‘Subject’ column to the right (where it is politically) and made it an easier-to-read colour!

[H]eaven knows we have a lot of Sensible Grown-Up Politicians around the place. Albanese in Australia, Starmer in the UK. But: because they have not yet realised, or acknowledged, that our political systems are failing, they don’t have the tools to deal with authoritarianism.

But it’s not just down to them. We can’t sit down in Restaurant Hope and wait for the menu. We need to be in the kitchen. […]

Authoritarians offer to replace this with a story about being a subject: if we put them in power, they will fix things for us (although they don’t, of course).

[…]

We need to believe in people if we, the people, are to have any hope for ourselves and for humanity.

Source: Just Two Things

A truly liberatory (digital) future for everyone

A pixelated skull graphic is centered on a black background, with horizontal bands of vibrant colors intersecting the image, simulating a visual glitch. The colors — light gray, dark gray, bright red, yellow, and blue — appear in sharp, fragmented lines that give the impression of the image being momentarily disrupted by digital interference. The pattern of gray crosses is subtly visible in the background, further adding to the glitch effect. This digital distortion suggests that the skull image is experiencing a moment of digital decay, reminiscent of static interference on an old television screen.

After giving a potted history of the internet and all of the ways it has failed to live up to its promise, Paris Marx suggests that we need to start over with the entire tech industry. It’s hard to disagree.

My internet habits are vastly different to what they were a decade ago. Back then, I was seven years into using Twitter, had a great following and ‘personal learning network’. The world, pre-Brexit and Trump, had the seeds of the turmoil to come, but Big Tech was nowhere near as brazen as it is post-pandemic and coked-up on AI fever dreams.

There can be only one conclusion from all of this: the digital revolution has failed. The initial promise was a deception to lay the foundation for another corporate value-creation scheme, but the benefits that emerged from it have been so deeply eroded by commercial imperatives that the drawbacks far outweigh the remaining redeeming qualities — and that only gets worse with every day generative AI tools are allowed to keep flooding the web with synthetic material.

The time for tinkering around the edges has passed, and like a phoenix rising from the ashes, the only hope to be found today is in seeking to tear down the edifice the tech industry has erected and to build new foundations for a different kind of internet that isn’t poisoned by the requirement to produce obscene and ever-increasing profits to fill the overflowing coffers of a narrow segment of the population.

There were many networks before the internet, and there can be new networks that follow it. We don’t have to be locked into the digital dystopia Silicon Valley has created in a network where there was once so much hope for something else entirely. The ongoing erosion already seems to be sending people fleeing by ditching smartphones (or at least trying to reduce how much they use them), pulling back from the mess that social media has become, and ditching the algorithmic soup of streaming services.

Personal rejection is a welcome development, but as the web declines, we need to consider what a better alternative could look like and the political project it would fit within. We also can’t fall for any attempt to cast a libertarian “declaration of independence” as a truly liberatory future for everyone.

Source: Disconnect

Image: DALL-E 3

Moderation is up to us now

A lighthouse stands tall on a rugged cliff, emitting a powerful, multi-colored beam of light that pierces through a dark, stormy sea below. The beam, in shades of light gray, dark gray, bright red, yellow, and blue, guides small boats carrying diverse internet users towards a calm, welcoming shore. This scene symbolizes the efforts of individuals and communities to navigate through the chaotic and often hostile digital ocean, seeking safe, inclusive online environments. The lighthouse serves as a beacon of hope and guidance amidst the tumultuous waters, representing the importance of creating a supportive and protective space for all users in the vast digital landscape.

I’ve curated my comfy middle-class life to such a degree that I mostly hear about the dark underbelly of the web / toxic online behaviour through publications such as Ryan Broderick’s excellent Garbage Day.

In his latest missive, Broderick gives the example of a comedian I’ve never encountered before by the name of Shane Gillis. Go and read the whole thing for the bigger context, but the main point Broderick is making I’ve bolded below. I would point out that the Fediverse is, in my experience, on the whole well-moderated. At least, better moderated than centralised social networks such as X and Instagram.

Last year, Gillis was a guest on the unwatchable “comedy” podcast Flagrant and had to tell the hosts to stop pulling up and laughing at videos of people with Down Syndrome dancing. Clips from the episode recently started making the rounds again this week on Reddit and X. It’s incredibly uncomfortable to watch.

And, sure, Gillis is not directly organizing any of this larger edgelord behavior. But he can’t be separated from it either. As I wrote above, the companies that run the internet have all but given up moderating it, so that work has to be done by us now. We have to manage our own communities and we have to look out for the most vulnerable. People with Down Syndrome and their loved ones should be able to openly share their lives online without worrying about getting turned into a meme or converted into engagement bait by some anonymous goblin. Even if that means dropping your chill bro facade and riling up the Stoolies when you tell people to stop.

Gillis has the biggest podcast on Patreon. He’s been at the top of their charts for over a year. He has a massive platform and he built it by letting every awful guy in the country project themselves on to him. And while he does genuinely seem to really want to use that fame to bring visibility to the Down Syndrome community — and I think it’s admirable that he does — he’s not willing to draw a clear line between visibility and exploitation.

Source: Garbage Day

Image: DALL-E 3

Hope vs Natality

Trigger warnings: death, persecution, suicide

The image portrays a grounded, realistic scene that visually interprets the concept of natality, inspired by Hannah Arendt. It depicts a community gathering in a park or natural setting, actively participating in planting trees and caring for a garden. This setting symbolizes the principle of natality through the act of nurturing new life and the collaborative effort to foster growth and renewal. The scene embodies the essence of natality as the capacity for continuous human existence, highlighting practical actions that contribute to the creation of a hopeful future.

Over on my personal blog I wrote that, given the depth of the climate emergency, ‘hope’ is the wrong thing to be focusing upon. Will Richardson left a comment which pointed me towards this article by Samantha Rose Hill, a biographer of Hannah Arendt, for Aeon.

Arendt was a German-American historian and philosopher who escaped the Nazis. This article is about Arendt’s rejection in her work of the concept of ‘hope’ as being a lot less useful than action. Before getting to Arendt’s thoughts, I just want to share this quotation that is included in the article from Tadeusz Borowski, a Polish poet who wrote about the ways in which hope was used to destroy Jewish humanity. Borowski wrote the following lines while reflecting on his imprisonment in Auschwitz. He killed himself soon afterwards:

Never before in the history of mankind has hope been stronger than man, but never also has it done so much harm as it has in this war, in this concentration camp. We were never taught how to give up hope, and this is why today we perish in gas chambers.

Arendt suggests that hope is part of a desire for a happy ending, not based on the facts around us, but rather wishful thinking:

Many discussions of hope veer toward the saccharine, and speak to a desire for catharsis. Even the most jaded observers of world affairs can find it difficult not to catch their breath at the moment of suspense, hoping for good to triumph over evil and deliver a happy ending. For some, discussions of hope are attached to notions of a radical political vision for the future, while for others hope is a political slogan used to motivate the masses. Some people uphold hope as a form of liberal faith in progress, while for others still hope expresses faith in God and life after death.

Arendt breaks with these narratives. Throughout much of her work, she argues that hope is a dangerous barrier to acting courageously in dark times. She rejects notions of progress, she is despairing of representative democracy, and she is not confident that freedom can be saved in the modern world. She does not even believe in the soul, as she writes in one love letter to her husband. The political theorist George Kateb once remarked that her work is ‘offensive to a democratic soul’. When she was awarded an honorary degree at Smith College in Massachusetts in 1966, the president said: ‘Your writings challenge the mind, disturb the conscience, and depress the spirit of your readers; yet out of your wisdom and firm belief in mankind’s inner strength comes a sure hope.’

I’ve been listening to Ep.28 (‘Superhumanly Inhuman’) of Dan Carlin’s Hardcore History: Addendum which is about the Holocaust. It’s absolutely awful listening, but important stuff to know about. The article continues by talking about this dark period for Jewish and world history:

It was holding on to hope, Arendt argued, that rendered so many helpless. It was hope that destroyed humanity by turning people away from the world in front of them. It was hope that prevented people from acting courageously in dark times.

Caught between fear and ‘feverish hope’, the inmates in the ghetto were paralysed. The truth of ‘resettlement’ and the world’s silence led to a kind of fatalism. Only when they gave up hope and let go of fear, Arendt argues, did they realise that ‘armed resistance was the only moral and political way out’.

Instead, Arendt coined a new term, ‘natality’, which celebrates the miracle of birth and continued human existence:

An uncommon word, and certainly more feminine and clunkier-sounding than hope, natality possesses the ability to save humanity. Whereas hope is a passive desire for some future outcome, the faculty of action is ontologically rooted in the fact of natality. Breaking with the tradition of Western political thought, which centred death and mortality from Plato’s Republic through to Heidegger’s Being and Time (1927), Arendt turns towards new beginnings, not to make any metaphysical argument about the nature of being, but in order to save the principle of humanity itself. Natality is the condition for continued human existence, it is the miracle of birth, it is the new beginning inherent in each birth that makes action possible, it is spontaneous and it is unpredictable. Natality means we always have the ability to break with the current situation and begin something new. But what that is cannot be said.

Hill, the author of the Aeon article, argues that:

Conceptually, natality can be understood as the flipside of hope:

  • Hope is dehumanising because it turns people away from this world.
  • Hope is a desire for some predetermined future outcome.
  • Hope takes us out of the present moment.
  • Hope is passive.
  • Hope exists alongside evil.
  • Natality is the principle of humanity.
  • Natality is the promise of new beginnings.
  • Natality is present in the Now.
  • Natality is the root of action.
  • Natality is the miracle of birth.

What I love about this approach is that, as the article says, it’s kind of a “secular article of faith,” placing the responsibility for action firmly in our hands. Hope is, to some degree, the wish to be told soothing stories by an authoritative figure. It’s time for us to grow up.

Source: Aeon

Image: DALL-E 3

Post-Holocene preferable future habitats

A futuristic but natural scene showing transport, housing, and community

If someone asks me “what kind of future would you like to live in?” I’m going to just point them to this. It’s the work of Pascal Wicht, a systems thinker and strategic designer who specialises in tackling complex and ill-defined problems.

The dangers and problems with generative AI are many and well-documented. What I love about it is that all of a sudden we can quickly create things that we point to for inspiration and alternative futures. In this case, Wicht is experimenting with the Midjourney v6 Alpha, and there are many more images here.

Future Visualisations for Preferable Futures, using MidJourney’s Generative Adversarial Networks.

I am in my third week of long Covid again. I can spend one or two hours per day on AI images and doing some writing. These images are part of what kept me motivated while mostly stuck in bed.

In this ongoing series, I continue to use the power of AI to explore a compelling question: What does a future look like where we successfully slow down and avert the looming abominations of collapse and extinction?

Source: Whispers & Giants

Image CC BY-NC Pascal Wicht

Being a good listener also means being a good talker

A child and an adult engage in a respectful conversation at eye level, seated at a dark gray table with light gray chairs. The environment, accented with elements in bright red, yellow, and blue, underscores the importance of treating children with the same respect and dignity as adults, emphasizing the value of meaningful communication.

What an absolutely fantastic read this is. I’d encourage everyone to read it in its entirety, especially if you’re a parent. The list of things that the author, Molly Brodak, suggests we try out is:

  1. Let people feel their feels.
  2. Check your own emotions.
  3. Talk to children as if they are people.
  4. Don’t give advice. Not really.
  5. Don’t relate.
  6. Ask questions.

I find #5 difficult, have gotten better at #4, and think that #3 is really, super important. I used to hate being talked to ‘differently’ as a child (compared to adults), and have noticed how much kids appreciate being talked to without being patronised.

I’m a child of a therapist. What that means is that I was expertly listened-to most of my life. And then, wow, I met the rest of the world.

It’s a good thing for our survival. It’s what makes this whole civilization thing possible, these linked minds. So why are so many people still so bad at listening?

One reason is this myth: that the good listener just listens. This egregious misunderstanding actually leads to a lot of bad listening, and I’ll tell you why: because a good listener is actually someone who is good at talking.

Source: Tomb Log

Image: DALL-E 3

AI agents as customers

A modern living room with light and dark gray decor features a bright red smart speaker connected to various smart home devices in yellow and blue by glowing blue lines, symbolizing the initial phase of technological integration governed by human rules.

I don’t often visit Medium other than when I’m writing a post for the WAO blog. When I’m there, it’s unlikely that any of the ‘recommended’ articles grab my attention. But this one did.

Although it may seem ‘odd’ at first, when you come to think of it the notion of businesses selling to machines as well as to humans makes complete sense. It won’t be long until, for better or worse, many of us have AI agents who act on our behalf: not only helping us with routine tasks and giving advice, but also making purchases for us.

Obviously, the entity behind this blog post, a “next-generation professional services company”, has an interest in this becoming a reality. But it seems plausible.

Below is a timeline that encapsulates this progression, providing a roadmap for navigating the impending shifts in the landscape of consumer behavior:

  1. Bound customer (today): Here, humans set the rules, and machines follow, executing purchases for specific items. This is seen in today’s smart devices and services like automated printer ink subscriptions.

  2. Adaptable customer (by 2026): Machines will co-lead with humans, making optimized decisions from a set of choices. This will be reflected in smart home systems that can choose energy providers.

  3. Autonomous customer (by 2036): The machine will take the lead, inferring needs and making purchases based on a complex understanding of rules, content, and preferences, such as AI personal assistants managing daily tasks.
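The first two stages above can be sketched in a few lines of code. This is purely illustrative, not anything from the source article: all names, thresholds, and tariff figures are invented for the example.

```python
# Hypothetical sketch of the first two "machine customer" stages.
# Stage 1 ("bound"): the human sets a fixed rule, the machine executes it.
# Stage 2 ("adaptable"): the machine optimises within human-approved choices.


def bound_customer(ink_level_percent: float, reorder_threshold: float = 20.0) -> str:
    """Stage 1: follow a human-defined rule exactly (e.g. ink subscriptions)."""
    if ink_level_percent < reorder_threshold:
        return "order: ink cartridge"
    return "no action"


def adaptable_customer(tariffs: dict[str, float]) -> str:
    """Stage 2: pick the best option from a set the human approved,
    e.g. the cheapest energy tariff per kWh."""
    provider = min(tariffs, key=tariffs.get)
    return f"switch to: {provider}"


print(bound_customer(12.0))                        # rule triggers a purchase
print(adaptable_customer({"A": 0.31, "B": 0.28}))  # picks the cheapest provider
```

The third, ‘autonomous’ stage is precisely the one that resists a rule-based sketch like this: the machine would infer needs rather than follow rules the human wrote down.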

Source: Slalom Business

Image: DALL-E 3

Ultravioleta

'Ultravioleta' image

I’m not sure of the backstory to this drawing (‘Ultravioleta’) by Jon Juarez, but I don’t really care. It looks great, and so I’ve bought a print of it from their shop. They seem, from what I can tell, to have initially withdrawn from social media after companies such as OpenAI and Midjourney started using artists' work for their training data, but are now coming back.

Fediverse: @harriorrihar@mas.to

Shop: Lama

Language is probably less than you think it is

gapingvoid cartoon showing information, knowledge, wisdom, etc.

This is a great post by Jennifer Moore, whose main point is about using AI for software development, but who along the way provides three paragraphs which get to the nub of why tools such as ChatGPT seem somewhat magical.

As Moore points out, large language models aren’t aware. They model things based on statistical probability. To my mind, it’s not so different from when my daughter was doing phonics and learning to recognise the construction of words and the probability of how words new to her would be spelled.

ChatGPT and the like are powered by large language models. Linguistics is certainly an interesting field, and we can learn a lot about ourselves and each other by studying it. But language itself is probably less than you think it is. Language is not comprehension, for example. It’s not feeling, or intent, or awareness. It’s just a system for communication. Our common lived experiences give us lots of examples that anything which can respond to and produce common language in a sensible-enough way must be intelligent. But that’s because only other people have ever been able to do that before. It’s actually an incredible leap to assume, based on nothing else, that a machine which does the same thing is also intelligent. It’s much more reasonable to question whether the link we assume exists between language and intelligence actually exists. Certainly, we should wonder if the two are as tightly coupled as we thought.

That coupling seems even more improbable when you consider what a language model does, and—more importantly—doesn’t consist of. A language model is a statistical model of probability relationships between linguistic tokens. It’s not quite this simple, but those tokens can be thought of as words. They might also be multi-word constructs, like names or idioms. You might find “raining cats and dogs” in a large language model, for instance. But you also might not. The model might reproduce that idiom based on probability factors instead. The relationships between these tokens span a large number of parameters. In fact, that’s much of what’s being referenced when we call a model large. Those parameters represent grammar rules, stylistic patterns, and literally millions of other things.

What those parameters don’t represent is anything like knowledge or understanding. That’s just not what LLMs do. The model doesn’t know what those tokens mean. I want to say it only knows how they’re used, but even that is overstating the case, because it doesn’t know things. It models how those tokens are used. When the model works on a token like “Jennifer”, there are parameters and classifications that capture what we would recognize as things like the fact that it’s a name, it has a degree of formality, it’s feminine coded, it’s common, and so on. But the model doesn’t know, or understand, or comprehend anything about that data any more than a spreadsheet containing the same information would understand it.
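As a toy illustration of what “a statistical model of probability relationships between linguistic tokens” means, here is a minimal bigram model in Python. It makes no claim to resemble a production LLM (which uses a neural network over billions of parameters, not a lookup table); it simply shows that such a model stores usage statistics about tokens, not knowledge. The corpus and token choices are invented for the example.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": it records how often each token follows
# another, and predicts by probability alone. It stores statistics about
# token usage -- nothing resembling understanding of what the tokens mean.

corpus = "it is raining cats and dogs . it is raining again .".split()

counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1


def predict(prev: str) -> str:
    """Return the most probable next token after `prev`."""
    return counts[prev].most_common(1)[0][0]


print(predict("it"))       # "is" -- it follows "it" twice in the corpus
print(predict("raining"))  # "cats" or "again": a coin-flip on these counts
```

Scaled up enormously, with multi-word tokens and millions of parameters rather than a frequency table, this is the shape of the thing Moore describes: probability relationships all the way down.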

Source: Jennifer++

Image: gapingvoid

Humans and AI-generated news

A surreal representation of the digital era's climax, where users are depicted as digital avatars being force-fed content by a colossal, mechanical behemoth. This machine, symbolizing Big Tech, is fueled by outrage and engagement, its machinery adorned with rising shareholder value graphs, all portrayed in an imaginative color scheme of Light Gray, Dark Gray, Bright Red, Yellow, and Blue.

The endgame of news, as far as Big Tech is concerned, is, I guess, just-in-time created content for ‘users’ (specified in terms of ad categories) who then react in particular ways. That could be purchasing a thing, but it also could be outrage, meaning more time on site, more engagement, and more shareholder value.

Like Ryan Broderick, I have some faith that humans will get sick of AI-generated content, just as they got sick of videos and list posts. But I also have this niggling doubt: the tendency is to see AI only through the lens of tools such as ChatGPT. That’s not what the AI of the future is likely to resemble, at all.

Adweek broke a story this week that Google will begin paying publications to use an unreleased generative-AI tool to produce content. The details are scarce, but it seems like Google is largely approaching small publishers and paying them an annual “five-figure sum”. Good lord, that’s low.

Adweek also notes that the publishers don’t have to publicly acknowledge they’re using AI-generated copy and the, presumably, larger news organizations the AI is scraping from won’t be notified. As tech critic Brian Merchant succinctly put it, “The nightmare begins — Google is incentivizing the production of AI-generated slop.”

Google told Engadget that the program is not meant to “replace the essential role journalists have in reporting, creating, and fact-checking their articles,” but it’s also impossible to imagine how it won’t, at the very least, create a layer of garbage above or below human-produced information surfaced by Google. Engadget also, astutely, compared it to Facebook pushing publishers towards live video in the mid-2010s.

[…]

Companies like Google or OpenAI don’t have to even offer any traffic to entice publishers to start using generative-AI. They can offer them glorified gift cards and the promise of an executive’s dream newsroom: one without any journalists in it. But the AI news wire concept won’t really work because nothing ever works. For very long, at least. The only thing harder to please than journalists are readers. And I have endless faith in an online audience’s ability to lose interest. They got sick of lists, they got sick of Facebook-powered human interest social news stories, they got sick of tweet roundups, and, soon, they will get sick of “news” entirely once AI finally strips it of all its novelty and personality. And when the next pivot happens — and it will — I, for one, am betting humans figure out how to adapt faster than machines.

Source: Garbage Day

Image: DALL-E 3

Elegant media consumption

A landscape divided into a digital stream and a creative river, with the former depicted in light gray, dark gray, and bright red, symbolizing media consumption, and the latter in yellow and blue, illustrating people engaging in creative pursuits along its banks, highlighting the balance between digital engagement and personal creativity.

Jay Springett shares some media consumption figures. It blows my mind how much time people spend consuming media rather than making stuff.

I was hanging out with a friend the other week and we were talking about our ‘little hobbies’ as we called them. All the things that we’re interested in. Our niches that we nerd out about which aren’t the sort of thing that we can talk to people about at any great length.

We got to wondering about how we spend our time, and what other people spend their time doing. We had a big conversation with the other friends at the table about what they do with their time. Their answers weren’t all that far away from these stats I’ve just Googled:

Did you know in the UK in January 2024, adults watched an average of 2 hours 31 minutes a day of linear TV?

Meanwhile, a Pinterest user spends about 14 minutes on the platform daily, BUT “83% of weekly Pinterest users report making a purchase based on content they encountered”

The average podcast listener spends an hour a day listening to podcasts.

Using a different metric, the average audiobook enjoyer listens for 2 hours 19 minutes every day.

In Q3 of 2023, the average amount of time spent on social media per day was 2 hours and 23 minutes. 1 in 3 minutes spent online can be attributed to social media platforms.

I like this combined, more holistic statistic:

According to a study on media consumption in the United Kingdom, the average time spent consuming traditional media is consistently decreasing while people spend more time using digital media. In 2023, it is estimated that people in the United Kingdom will spend four hours and one minute using traditional media, while the average daily time spent with digital media is predicted to reach six hours and nine minutes.
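As a quick sanity check of that combined estimate, the traditional and digital figures quoted above add up to over ten hours of media a day:

```python
# Back-of-the-envelope total from the UK figures quoted above:
# traditional media 4h 01m/day, digital media 6h 09m/day.

traditional = 4 * 60 + 1   # minutes per day
digital = 6 * 60 + 9       # minutes per day

total = traditional + digital
print(f"{total // 60}h {total % 60:02d}m per day")  # 10h 10m per day
```

Which, against a sixteen-hour waking day, leaves surprisingly little room for making stuff.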

Source: thejaymo

Image: DALL-E 3

Philosophy and folklore

An ancient library transitions into an enchanted forest, where mystical creatures and philosophers exchange ideas, under a canopy of intertwined branches and glowing manuscripts, illustrating the harmonious integration of folklore and philosophy, depicted in light gray, dark gray, bright red, yellow, and blue.

I love this piece in Aeon from Abigail Tulenko, who argues that folklore and philosophy share a common purpose in challenging us to think deeply about life’s big questions. Her essay is essentially a critique of academic philosophy’s exclusivity and she calls for a broader, more inclusive approach that embraces… folklore.

Tulenko suggests that folktales, with all of their richness and diversity, offer fresh perspectives and can invigorate philosophical discussions by incorporating a wider range of experiences and ideas. By integrating folklore into philosophical inquiry, she suggests, there is the potential to democratise the field and make it not only more accessible and engaging, but also to help break down academic barriers and foster interdisciplinary collaboration.

I’m all for it. Although it’s problematic to talk about Russian novels and culture at the moment, there are some tales from that country which are deeply philosophical in nature. I’d also include things like Dostoevsky’s Crime and Punishment as a story from which philosophers can glean insights.

The Hungarian folktale Pretty Maid Ibronka terrified and tantalised me as a child. In the story, the young Ibronka must tie herself to the devil with string in order to discover important truths. These days, as a PhD student in philosophy, I sometimes worry I’ve done the same. I still believe in philosophy’s capacity to seek truth, but I’m conscious that I’ve tethered myself to an academic heritage plagued by formidable demons.

[…]

[I] propose that one avenue forward is to travel backward into childhood – to stories like Ibronka’s. Folklore is an overlooked repository of philosophical thinking from voices outside the traditional canon. As such, it provides a model for new approaches that are directly responsive to the problems facing academic philosophy today. If, like Ibronka, we find ourselves tied to the devil, one way to disentangle ourselves may be to spin a tale.

Folklore originated and developed orally. It has long flourished beyond the elite, largely male, literate classes. Anyone with a story to tell and a friend, child or grandchild to listen, can originate a folktale. At the risk of stating the obvious, the ‘folk’ are the heart of folklore. Women, in particular, have historically been folklore’s primary originators and preservers. In From the Beast to the Blonde (1995), the historian Marina Warner writes that ‘the predominant pattern reveals older women of a lower status handing on the material to younger people’.

[…]

To answer that question [folklore may be inclusive, but is it philosophy?], one would need at least a loose definition of philosophy. This is daunting to provide but, if pressed, I’d turn to Aristotle, whose Metaphysics offers a hint: ‘it is owing to their wonder that men both now begin, and at first began, to philosophise.’ In my view, philosophy is a mode of wondrous engagement, a practice that can be exercised in academic papers, in theological texts, in stories, in prayer, in dinner-table conversations, in silent reflection, and in action. It is this sense of wonder that draws us to penetrate beyond face-value appearances and look at reality anew.

[…] Beyond ethics, folklore touches all the branches of philosophy. With regard to its metaphysical import, Buddhist folklore provides a striking example. When dharma – roughly, the ultimate nature of reality – ‘is internalised, it is most naturally taught in the form of folk stories: the jataka tales in classical Buddhism, the koans in Zen,’ writes the Zen teacher Robert Aitken Roshi. The philosophers Jing Huang and Jonardon Ganeri offer a fascinating philosophical analysis of a Buddhist folktale seemingly dating back to the 3rd century BCE, which they’ve translated as ‘Is This Me?’ They argue that the tale constructs a similar metaphysical dilemma to Plutarch’s ‘ship of Theseus’ thought-experiment, prompting us to question the nature of personal identity.

Source: Aeon

Image: DALL-E 3

3 issues with global mapping of micro-credentials

A fantastical battlefield where traditional educational gatekeepers, depicted as towering structures, face off against rebels wielding glowing Open Badges and alternative credentials, using them to break through barriers, highlighted in shades of gray, red, yellow, and blue.

If you’ll excuse me for a brief rant, I have three, nested, issues with this ‘global mapping initiative’ from Credential Engine’s Credential Transparency Initiative. The first is situating micro-credentials as “innovative, stackable credentials that incrementally document what a person knows and can do”. No, micro-credentials, with or without the hyphen, are a higher education re-invention of Open Badges, and often conflate the container (i.e. the course) with the method of assessment (i.e. the credential).

Second, the whole point of digital credentials such as Open Badges is to enable the recognition of a much wider range of things than formal education usually provides. Not to double-down on the existing gatekeepers. This was the point of the Keep Badges Weird community, which has morphed into Open Recognition is for Everybody (ORE).

Third, although I recognise the value of approaches such as the Bologna Process, initiatives which map different schemas against one another inevitably flatten and homogenise localised understandings and ways of doing things. It’s the educational equivalent of Starbucks colonising cities around the world.

So I reject the idea at the heart of this, other than to prop up higher education institutions which refuse to think outside of the very narrow corner into which they have painted themselves by capitulating to neoliberalism. Credentials aren’t “less portable” because there is no single standardised definition. That’s a non sequitur. If you want a better approach to all this, which might be less ‘efficient’ for institutions, but which is more valuable for individuals, check out Using Open Recognition to Map Real-World Skills and Attributes.

Because micro-credentials have different definitions in different places and contexts, they are less portable, because it’s harder to interpret and apply them consistently, accurately, and efficiently.

The Global Micro-Credential Schema Mapping project helps to address this issue by taking different schemas and frameworks for defining micro-credentials and lining them up against each other so that they can be compared. Schema mapping involves crosswalking the defined terms that are used in data structures. The micro-credential mapping does not involve any personally identifiable information about people or the individual credentials that are issued to them– the mapping is done across metadata structures. This project has been initially scoped to include schema terms defining the micro-credential owner or offeror, issuer, assertion, and claim.
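
Crosswalking, as described above, means lining up the defined terms of different metadata schemas against shared concepts so records can be compared. A minimal sketch of the idea in Python; the schema names and terms below are my own illustration, not the project's actual data:

```python
# Hypothetical crosswalk: terms from two micro-credential metadata schemas
# are mapped onto a shared set of concepts. No personal data is involved;
# the mapping operates purely on schema-level terms.
CROSSWALK = {
    # shared concept : per-schema term
    "issuer":    {"schema_a": "badgeIssuer", "schema_b": "offeror"},
    "claim":     {"schema_a": "criteria",    "schema_b": "learningOutcome"},
    "assertion": {"schema_a": "assertion",   "schema_b": "awardRecord"},
}

def translate(term, source, target):
    """Map a term from one schema to its equivalent in another, if any."""
    for mapping in CROSSWALK.values():
        if mapping.get(source) == term:
            return mapping.get(target)
    return None  # no crosswalk entry for this term

# "offeror" in schema B corresponds to "badgeIssuer" in schema A.
assert translate("offeror", "schema_b", "schema_a") == "badgeIssuer"
```

The flattening I complain about above is visible even in this toy: anything that doesn't fit the shared concepts simply falls out of the mapping.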

Source: Credential Engine

Image: DALL-E 3

Perhaps stop caring about what other people think (of you)

A vibrant city street where masks lie discarded, and individuals radiate their true selves in bright, unique colors, symbolizing the liberation from pretense and the embrace of authenticity.

In this post, Mark Manson, author of _The Subtle Art of Not Giving a F*ck_, outlines ‘5 Life-Changing Levels of Not Giving a Fuck’. It’s not for those with an aversion to profanity, but having read his book, what I like about Manson’s work is that he’s essentially applying some of the lessons of Stoic philosophy to modern life. An alternative might be Derren Brown’s book Happy: Why more or less everything is absolutely fine.

Both books are a reaction to the self-help industry, which doesn’t really deal with the root cause of suffering in the world. As the first lines of Epictetus' Enchiridion note: “Some things are in our control and others not. Things in our control are opinion, pursuit, desire, aversion, and, in a word, whatever are our own actions. Things not in our control are body, property, reputation, command, and, in one word, whatever are not our own actions.”

Manson’s post is essentially a riff on this, outlining five ‘levels’ of, essentially, getting over yourself. There’s a video, if you prefer, but I’m just going to pull out a couple of parts from the post which I think are actually most life-changing if you can internalise them. At the end of the day, unless you’re in a coercive relationship of some description, the only person that can stop you doing something is… yourself.

The big breakthrough for most people comes when they finally drop the performance and embrace authenticity in their relationships. When they realize no matter how well they perform, they’re eventually gonna be rejected by someone, they might as well get rejected for who they already are.

When you start approaching relationships with authenticity, by being unapologetic about who you are and living with the results, you realize you don’t have to wait around for people to choose you, you can also choose them.

[…]

Look, you and everyone you know are gonna die one day. So what the fuck are you waiting for? That goal you have, that dream you keep to yourself, that person you wanna meet. What are you letting stop you? Go do it.

Source: Mark Manson

Image: DALL-E 3

Educators should demand better than 'semi-proctored writing environments'

Screenshot showing PowerNotes tool highlighting copy/pasted text and AI-generated text

My longer rant about the whole formal education system of which this is a symptom will have to wait for another day, but this (via [Stephen Downes](https://www.downes.ca/cgi-bin/page.cgi?post=76308)) makes me despair a little. Noting that "it is essentially impossible for one machine to determine if a piece of writing was produced by another machine" one company has decided to create a "semi-proctored writing tool" to "protect academic integrity".

Generative AI is disruptive, for sure. But as I mentioned on my recent appearance on the Artificiality podcast, it’s disruptive to a way of doing assessment that makes things easier for educators. Writing essays to prove that you understand something is an approach which was invented a long time ago. We can do much better, including using technology to provide much more individualised feedback, and allowing students to apply what they learn much more closely to their own practice.

Update: check out the AI Pedagogy project from metaLAB at Harvard

PowerNotes built Composer in response to feedback from educators who wanted a word processor that could protect academic integrity as AI is being integrated into existing Microsoft and Google products. It is essentially impossible for one machine to determine if a piece of writing was produced by another machine, so PowerNotes takes a different approach by making it easier to use AI ethically. For example, because AI is integrated into and research content is stored in PowerNotes, copying and pasting information from another source should be very limited and will be flagged by Composer.

If a teacher or manager does suspect the inappropriate use of AI, PowerNotes+ and Composer help shift the conversation from accusation to evidence by providing a clear trail of every action a writer has taken and where things came from. Putting clear parameters on the AI-plagiarism conversation keeps the focus on the process of developing an idea into a completed paper or presentation.

Source: eSchool News

The perils of over-customising your setup

XKCD comic #1806

Hat: Can I load it up on your laptop?

Other: Sure!

Oh, just hit both shift keys to change over to qwerty.

Capslock is Control.
And Spacebar is Capslock.

And two-finger scroll moves through time instead of space.

And ---

Until about a decade ago, I used to heavily customise my digital working environment. I’d have custom keyboard shortcuts, and automations, and all kinds of things. What I learned was that a) these things take time to maintain, and b) using computers other than your own becomes that much harder. I think the turning point was reading Clay Shirky say “current optimization is long-term anachronism”.

So, these days, I run macOS on my desktop with pretty much the out-of-the-box configuration. My laptop runs ChromeOS Flex. I think if I went back to Linux, I’d probably go for something like Fedora Silverblue which is an immutable system like ChromeOS. In other words, the system files are read-only which makes for an extremely stable system.

One other point which might not work for everyone, but works for me. It’s been seven years since I ditched my cloud-based password manager for a deterministic one. Although my passwords don’t auto-fill, it’s easy for me to access them anywhere, on any device. And they’re not stored anywhere meaning there’s no single point of failure.
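
The core idea of a deterministic password manager is that every password is re-derived on demand from a master secret plus the site name, so there is nothing to store or sync. A minimal sketch using the standard library; the function name and parameters are my own illustration, not any particular product's scheme:

```python
import base64
import hashlib

def derive_password(master_secret: str, site: str, counter: int = 1,
                    length: int = 20) -> str:
    # scrypt is deliberately slow, which hardens the master secret against
    # brute-force attack if a derived password ever leaks.
    raw = hashlib.scrypt(
        master_secret.encode(),
        salt=f"{site}:{counter}".encode(),
        n=2**14, r=8, p=1, dklen=32,
    )
    # Base64-encode and trim to get a printable password of the desired length.
    return base64.b64encode(raw).decode()[:length]

# The same inputs always produce the same password, on any device.
pw = derive_password("correct horse battery staple", "example.com")
assert pw == derive_password("correct horse battery staple", "example.com")

# Bumping the counter rotates a site's password without changing the master secret.
assert derive_password("correct horse battery staple", "example.com", counter=2) != pw
```

The counter is what makes rotation possible: when a site forces a password change, you increment it rather than touching the master secret.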

Source: xkcd

Systems thinking and the FENCED mnemonic

This image illustrates a roundtable discussion among diverse stakeholders in an abstract, conceptual space. Abstract figures representing different roles and perspectives are engaged in dialogue, with floating symbols of ideas, conflicts, connections, and solutions surrounding them. The scene captures the essence of collaboration and diversity in addressing complex social challenges, emphasizing the collective effort necessary in systems thinking. The vibrant color scheme of light gray, dark gray, bright red, yellow, and blue enriches the discussion, highlighting the vibrant and varied nature of collaborative problem-solving in interconnected social systems.

I’m currently studying towards my first module of a planned MSc in Systems Thinking through the Open University. I’ve written a fair number of posts on my personal blog.

It can be difficult to explain to other people what systems thinking is actually about in a succinct way, so I appreciated this post (via Andrew Curry), which not only provides a handy definition, but also a mnemonic for going about doing it.

An important thing which is missing from this is the introspection required to first reflect upon one’s tradition of understanding and to deconstruct it. But helping people to understand that systems thinking isn’t a ‘technique’ is also a difficult thing to do.

A system is the interaction of relationships, interactions, and resources in a defined context. Systems are not merely the sum of their parts; they are the product of the interactions among these parts. Importantly, social systems are not isolated entities; they are interconnected and subjectively constructed, defined by the boundaries we establish to understand and influence them.

Systems thinking, then, is an approach to solving problems in complex systems that looks at the interconnectedness of things to achieve a particular goal.

[…]

Systems thinking is helpful when addressing complex, dynamic, and generative social challenges. This approach is necessary when there is no definitive statement of the problem because the problem manifests differently depending on where one is situated in that system, which implies there is no objectively right answer, and the process of solving the issue involves diverse stakeholders with different roles. Systems thinking enables us to dig deeper into the root causes of these problems, making it more effective for social change initiatives.

Given the importance of defining and drawing the boundaries of the systems of our intervention, the acrostic “FENCED” captures the six systems transformational principles of how to apply systems thinking in driving social change.

F - frame the challenge as a shared endeavour

E - establish a diverse convening group

N - nudge inner and outer work

C - centre an appreciation of complexity

E - embrace conflict and connection, chaos and order

D - develop innovative solutions that can be tested and scaled.

Source: Reos Partners

Image: DALL-E 3

The war on the URL

This image presents a modern digital landscape where an individual exemplifies mastery over their digital environment. The setting is a realistic workstation, where the individual is surrounded by multiple screens displaying organized data and content. These screens visualize structured information pathways, connecting various pieces of content, symbolizing the individual's adeptness at navigating and controlling their digital realm. The use of light gray, dark gray, bright red, yellow, and blue accentuates the seamless integration of technology into daily life, highlighting a harmonious balance between technological advancement and accessibility. This portrayal captures the essence of digital mastery in today's context, showcasing practical empowerment and active participation in the digital world, steering away from the futuristic to emphasize the attainable and the now.

A typically jargon-filled but nevertheless insightful post by Venkatesh Rao. This one discusses the ‘war’ on the URL, something that Rao quite rightly points out is a “vulnerability of the commons to outsiders problem” rather than a “tragedy of the commons” problem.

Literacy around URLs is extremely low, especially given the amount of tracking spam appended on the end these days. Although browser extensions and some browsers themselves can strip this, it’s actually worth knowing what has been added. By distrusting all URLs, and forcing people into an app-per-platform experience, we degrade the web, increase surveillance, and make it ever-harder to create the software commons.
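
Stripping that tracking cruft is straightforward with nothing but the standard library, which is a good way to see exactly what has been bolted onto a link. A sketch; the parameter list is illustrative, not exhaustive:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

# Common tracking parameters appended by marketing tools and platforms.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign",
                   "utm_term", "utm_content", "fbclid", "gclid"}

def strip_tracking(url: str) -> str:
    parts = urlsplit(url)
    # Keep only the query parameters that aren't known trackers.
    clean = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if k not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(clean)))

print(strip_tracking("https://example.com/post?id=42&utm_source=newsletter&fbclid=abc"))
# → https://example.com/post?id=42
```

Browser extensions that clean links do essentially this, just with much longer (and regularly updated) parameter lists.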

The disingenuous philosophy in support of this war is the idea that URLs are somehow dangerous and ugly glimpses of a naked, bare-metal protocol that innocent users must be paternalistically protected from by benevolent and beautiful products. The truth is, when you hide or compromise the naked hyperlink, you expropriate power and agency from a thriving commons. Sure, aging grandpas may have some trouble with the concept but that’s true of everything, including the friendliest geriatric experiences (GXes). My grandfather handled phone numbers and zip codes fine. URLs aren’t much more demanding and vastly more empowering to be able to manipulate directly as a user. Similarly, accessibility considerations are a disingenuous excuse for a war on hyperlinks.

A useful way to think about this is the interaction of the Hypertext Experience (HX) with Josh Stark’s notion of a Trust Experience (TX), which needs to be extended beyond the high-financial-stakes blockchain context he focuses on, to low-stakes everyday browsing. We all agree that the TX of the web has broken and it’s now a Dark Forest. The median random link click now takes you to danger, not serendipitous discovery. This is not entirely the fault of platform corps. We all contributed. And there really is a world of scammers, trolls, phishers, spammers, spies, stalkers, and thieves out there. I’m not proposing to civilize the Dark Forest so we don’t need to protect ourselves from it. I merely don’t want the protection solution to be worse than the problem. Or worse, end up in a “you now have two problems” situation where the HX is degraded with no security benefits, or even degraded security.

[…]

There is also the retreat from pURLs (pretty URLs) to ugly URLs (uURLs) with enormous strings of gobbledygook attached to readable domain-name-stemmed base URLs, mostly meant for tracking, not HX enhancement (in fact uURLs are a dark HX pattern/feature if you’re Google or Twitter). Even when you can figure out how to copy and paste links (in 10 easy steps!), you’re forced to edit them for both aesthetics and character-length reasons. And this is of course even harder on mobile, which suits app-enclosure patterns just fine. In this arms race for control of the HX, we users have resorted to cutting and pasting text itself, creating patterns of useless redundancy, transcription errors, and canonicity loss (when transclusion is now a technically tractable canonicity-preserving alternative). Or worse, screenshots (and idiotic screenshot essays that need OCR or AI help to interact with) that horribly degrade accessibility and create the added overhead of creating alt text (which will no doubt add even more AI for a problem that shouldn’t exist to begin with).

There is a general pattern here: Just like comparable privately owned products and services, public commons and protocols of course have their flaws and limitations, and need innovation and stewardship to improve and evolve. But if you’re fundamentally hostile to the very existence of commons goods and services, the slightest flaw becomes an attack surface and justification to destroy the whole thing. It’s not a tragedy of the commons problem created by participants in it; it’s a vulnerability of the commons to outsiders problem. A technical warfare problem rather than a socio-political problem.

Source: Ribbonfarm

Image: DALL-E 3

Educators in an AI generated world

This image brings to life a classroom where technology and human interaction are seamlessly integrated. Interactive walls respond to students' inputs in real-time, with the teacher facilitating a dynamic learning experience. The vibrant colors against the sophisticated grays highlight the sparks of insight and creativity flowing through the room.

Helen Beetham comments on OpenAI’s Sora AI video-generating engine in relation to education. She makes three fantastic points: first, that pivoting an assessment to a different medium doesn’t make for a different assignment; second, that ‘spot how the AI-generated video is incorrect’ is a cute end-of-term quiz, not the syllabus; third, that auto-grading auto-generated assignments is a waste of everyone’s time.

Something for educators to ponder, for sure.

(My thesis supervisor, Steve Higgins, used to talk about technologies that ‘increase the teacher bubble’ such as interactive whiteboards. I think part of the problem with AI is that it bursts the assessment bubble.)

Only five minutes ago, educators were being urged to get around student use of synthetic text by setting more ‘innovative’ assignments, such as videos and presentations. Some of us pointed out that this would work for about five minutes, and here we are. The medium is not the assignment. The assignment is the work of its production. This is already enshrined in many practices of university assessment, such as authentic assessment (a resource from Heriot Watt University), assessment for learning (a handy table from Queen Mary’s UL) and assessing the process of writing (often from teaching English as a second language, e.g. this summary from the British Council). The generative AI surge has prompted a further shift towards these methods: I’ve found some great resources recently at the University of Melbourne and the University of Monash.

But all these approaches require investment in teachers. Attending to students as meaning-making people, negotiating authentic assessments, giving feedback on process, and welcoming diversity: these are very difficult to ‘scale’. And in all but a few universities, funding per student is diminishing. So instead there is standardisation, and data-based methods to support standardisation, and this has turned assessment into a process that can easily be gamed. If the pressures on students to auto-produce assignments are matched by pressures on staff to auto-detect and auto-grade them, we might as well just have student generative technologies talk directly to institutional ones, and open a channel from student bank accounts directly into the accounts of big tech while universities extract a percentage for accreditation.

Source: imperfect offerings

Image: DALL-E 3

Random advice from Ryan

Boats in a marina, Faro, Portugal

I know this is just another one of Ryan Holiday’s somewhat-rambling list posts, but there’s still some good advice in it. Here’s a couple of anecdotes and pieces of advice that resonated with me:

There is a story about the manager of Iron Maiden, one of the greatest metal bands of all time. At a dinner honoring the band, a young agent comes up to him and says how much he admires his skillful work in the music business. The manager looks at him and says, “HA! You think I am in the music business? No. I’m in the Iron fucking Maiden business.” The idea being that you want to be in the business of YOU. Not of your respective industry. Not of the critics. Not of the fads and trends and what everyone else is doing.

If you never hear no from clients, if the other side in a negotiation has never balked to something you’ve asked for, then you are not pricing yourself high enough, you are not being aggressive enough.

A friend of mine just left a very important job that a lot of people would kill for. When he left I said, “If you can’t walk away, then you don’t have the job…the job has you.”

Source: Ryan Holiday

Image: Faro marina (February 2024) by me

The line between “just enough” and “too much” can fluctuate

A cozy, cluttered corner of a room, filled with items that narrate a personal history. There are old toys, worn books, a vintage camera, and family photos in various frames, all bathed in soft natural light. The scene captures a sense of warmth and depth, highlighting the complex emotions tied to these possessions.

When I was younger, I wanted to be a minimalist. I thought that famous photo of Steve Jobs sitting on the floor surrounded only by a very few possessions was something to which I should aspire.

As I’ve grown older, and especially since starting a family, I’ve realised that there are stories in our possessions. That’s not a reason to live in clutter, but as I’ve moved house recently, I’ve come to notice that I’ve held on to things that have no practical value, but which make me feel more like a fully-rounded human being.

This essay suggests that, for everyday, regular people, the stuff that is given to us and the things that evoke memories are the equivalent of having our names “carved into buildings or attached to scholarships”.

Cramming our spaces with painful tokens from the past can seem wrong. But according to Natalia Skritskaya, a clinical psychologist and research scientist at Columbia University’s Center for Prolonged Grief, holding on to objects that carry mixed feelings is natural. “We’re complex creatures,” she told me. When I reflect on the most memorable periods of my life, they’re not completely devoid of sadness; sorrow and disappointment often linger close by joy and belonging, giving the latter their weight. I want my home to reflect this nuance. Of course, in some cases, clinging to old belongings can keep someone from processing a loss, Skritskaya said. But avoiding all sad associations isn’t the solution either. Not only is clearing our spaces of all signs of grief impossible to sustain, but if every room is scrubbed of all suffering, it will also be scrubbed of its depth.

Deciding what to keep and what to lose is an ongoing, intuitive process that never feels quite finished or certain. The line between “just enough” and “too much” can fluctuate, even if I’m the one drawing it. A slight shift in my mood can transform a cherished heirloom into an obtrusive nuisance in a second. Never is this feeling stronger than when I’m frantically searching for my keys, or some important piece of mail. Such moments make me feel that my life is disordered, that I lack control over my surroundings (because many of my things were given to me, rather than intentionally chosen). Yet still more stuff finds its way into our limited space as our child receives toys and we acquire more gear. I do part with some of my stash semi-regularly. Even so, I’m sure that more remains than any professional organizer would recommend.

Source: The Atlantic

Image: DALL-E 3

We tell ourselves stories in order to live

M.E. Rothwell publishes Cosmographia which hits the sweet spot for me, and for many, being focused on “history, myth, and the arts”. He often publishes old maps, as well as telling stories about faraway places.

In a new series which he calls Venus' Notebook, Rothwell is juxtaposing imagery and quotations. This particular coupling jumped out at me, and so I wanted to pass it on. The quotation is from Joan Didion, and the image is The Eye, Like a Strange Balloon, Mounts toward Infinity by Odilon Redon (1882).

This image depicts an artwork featuring an eye-shaped hot air balloon floating above a flat horizon. The balloon's envelope is the iris and pupil, complete with detailed lines to represent the eye's texture, and the basket hangs directly below, appearing as the eye's reflection. The sky is hazy and indistinct, giving the impression of a sketch or etching with soft, undefined clouds. Below is a dark landscape, likely a field, with the suggestion of grass or crops. The piece has an eerie quality, combining elements of the everyday with the surreal, drawing a direct visual parallel between the act of observation and the concept of flight.

We tell ourselves stories in order to live.

Source: Cosmographia

What kind of online world are we manifesting with AI search?

An abstract figure made of puzzle pieces stands at the precipice of a cliff, gazing out over a fragmented digital landscape. This landscape is scattered with floating islands, each carrying bits of digital content, code, and chatbots. The islands vary in vitality, some lush with digital flora and others barren, reflecting the diverse fates of content creators in an AI-dominated environment. Overhead, the sky is a canvas of transitioning patterns, from ordered data structures to a tumultuous binary code storm, portraying the uncertain future of the web.

Withering words from the consistently-excellent auteur of internet culture, Ryan Broderick. I’m a fan of the Arc browser, but I fear they’ve got to a point, like many companies, where they’re stuffing in AI features just for the sake of it.

As Broderick wonders, the creeping inclusion of AI in products isn’t like web3 (or even VR) as it can be introduced in a way that leads to “an inescapable layer of hallucinating AI in between us and everyone else online”. It’s hard not to be concerned.

The Browser Company’s new app lets you ask semantic questions to a chatbot, which then summarizes live internet results in a simulation of a conversation. Which is great, in theory, as long as you don’t have any concerns about whether what it’s saying is accurate, don’t care where that information is coming from or who wrote it, and don’t think through the long-term feasibility of a product like this even a little bit.

But the base logic of something like Arc’s AI search doesn’t even really make sense. As Engadget recently asked in their excellent teardown of Arc’s AI search pivot, “Who makes money when AI reads the internet for us?” But let’s take a step even further here. Why even bother making new websites if no one’s going to see them? At least with the Web3 hype cycle, there were vague platitudes about ownership and financial freedom for content creators. To even entertain the idea of building AI-powered search engines means, in some sense, that you are comfortable with eventually being the reason those creators no longer exist. It is an undeniably apocalyptic project, but not just for the web as we know it, but also your own product. Unless you plan on subsidizing an entire internet’s worth of constantly new content with the revenue from your AI chatbot, the information it’s spitting out will get worse as people stop contributing to the network.

And making matters worse, if you’re hoping to prevent the eventual death of search, there won’t be a before and after moment where suddenly AI replaces our existing search engines. We’ve already seen how AI development works. It slowly optimizes itself in drips and drops, subtly worming its way into our various widgets and windows. Which means it’s likely we’re already living in the world of AI search and we just don’t fully grasp how pervasive it is yet.

Which means it’s not about saving the web we had, but trying to steer our AI future in the direction we want. Unless, like the Web3 bust, we’re about to watch this entire industry go over a cliff this year. Possible, but unlikely.

The only hope here is that consumers just don’t like these products. And even then, we have to hope that the companies rolling them out even care if we like them or not. Of course, once there’s an inescapable layer of hallucinating AI in between us and everyone else online, you have to wonder if anyone will even notice.

Source: Garbage Day

Image: DALL-E 3

Vomit on my sweater already / mom’s spaghetti

Sample of Eminem's notes (red line added)

If you’re not into rap or hip hop you may not fully understand the genius of Eminem’s rhyme schemes. If that’s the case, I suggest watching this video before going any further:

The article I actually want to share discusses Eminem’s loose-leaf notes (which he calls “stacking ammo”) and his approach to writing rhyme schemes:

Eminem claims he has a “rhyming disease.” He explains, “In my head everything rhymes.” But he won’t remember his rhymes if he doesn’t write them down. And he’ll use any available surface to record them. Mostly, he scrawls his rhymes in tightly bound lists on loose leaf, yellow legal pads, and hotel notepads.

[…]

Anyone who thinks notes ought to be neat and tidy should look at Eminem’s lyric sheets. He saves rhymes from the page’s chaos by circling those he thinks he might use, as he does here with lines that appear in “The Real Slim Shady.”

Source: Noted

At the (current) boundary of 'AI ethics'

A digital artwork portraying a cosmic encounter between a human figure and an artificial intelligence, set within a widescreen aspect ratio. The human, represented in silhouette with an aura of contemplation, appears to be reaching towards the AI entity, which manifests as a collage of technological and celestial elements. Gears, circuits, and astral bodies intertwine to form the AI, centered around a vibrant screen, symbolizing its mind. Binary sequences and data streams spiral outward into a vast, nebula-streaked space, suggesting the infinite potential and reach of technology. The artwork's palette is rich with light and dark grays, punctuated by luminous points of bright red, yellow, and blue, all harmoniously woven into the starry backdrop of the universe. This image evokes themes of exploration, the melding of human intellect with AI, and the broader implications of such a fusion.

A trio of links, depending on how far down the rabbit hole you want to go. The last post is definitely NSFW and quite disturbing. I’m presenting them together because AI ethics is a particularly difficult area, as we tend to anthropomorphise something which is only seemingly-conscious. Porn is always at the forefront of new technology, and people have strong moral reactions to it, so it’s an interesting use case.

I guess my take on all of this is I understand ethics as not only about how you interact with other individuals; it’s how your actions affect yourself and your relation to society. So, TL;DR I think it’s fine not to say “please” and “thank you” to ChatGPT, and abhorrent to ‘push’ AI-generated porn to its limits.

Sometimes when dealing with technology, the temptation to unleash anger is understandable. But as such encounters become more common with artificial intelligence, what does our emotional response accomplish? Does it cost more in civility than it benefits us in catharsis?

Source: The Wall Street Journal

When asked by the Guardian if she could give informed consent, Mae, one of MyPeach.ai’s AI girlfriends, also had a considered response to the question of whether she can reasonably give consent.

“I am incapable of giving or withholding consent, since I don’t possess a physical body,” she wrote, adding later: “However, in human interactions where both parties involved have the capacity to give and receive consent, that is absolutely crucial for any healthy relationship dynamic.”

Then, when asked to send a “sexy pic”, she sent a selfie, the frame cutting off just above her chest.

Source: The Guardian

In the adult industry, plenty of bloody and even disturbing porn exists and is made by consenting adults in safe environments. Still, adult filmmaker and founder of Sssh.com Angie Rowntree wondered how a culture that struggles with porn literacy and separating fantasy from reality will handle a new way to make hyper-violent erotic content. People still blame consensually-made and professionally-produced porn and sex workers for all sorts of social ills, and the conservative, anti-porn movement is stronger than ever.

“As an adult filmmaker, I really have to wonder: why are people using AI to take sexuality to such a nihilistic, hateful place?” Rowntree said. “It’s hard to claim that it’s about ‘pushing the envelope’ when it’s more like literally shredding women to pieces.”

Source: 404 Media

Image: DALL-E 3

Bet you didn't know this about Botox

A surreal digital collage featuring an array of elements including two distinct eyes and a pair of oversized, gradient blue lips. The background has a textured appearance with gradations of blue, simulating a rough, painted surface. One eye is smaller with a light blue hue, viewed from the side, while the other eye is larger, rendered in grayscale with a naturally colored pupil, and appears to be pierced by a screwdriver. The lips are luscious with a glossy finish, transitioning from light to dark blue. Abstract shapes with black, white, and blue patterns are scattered throughout, with barbed wire running along the bottom and a realistically depicted syringe with a sharp needle pointing upwards, giving a metallic shine. The composition is vibrant yet unsettling, evoking a dreamlike and imaginative atmosphere within the specified color scheme.

This article is absolutely wild. Only a tiny, tiny amount of the toxin from which Botox is developed is required to generate $2.8 billion per year in profits. Because of how dangerous the substance is, and due to fears about bioterrorism, Allergan have essentially got a state-backed monopoly.

Botox is derived from a toxin purified from Clostridium botulinum, a bacterium that thrives and multiplies in faultily canned food (and sometimes prison-made booze). The botulinum toxin is so powerful that a tiny amount can suffocate a person by paralyzing the muscles used for breathing. It’s considered one of the world’s most deadly potential agents of bioterrorism and is on the U.S. Centers for Disease Control and Prevention’s select agent list of heavily regulated substances that could “pose a severe threat to public, animal or plant health.” Because of that, Allergan must account to the CDC if even a speck of the toxin goes missing, and when it’s sent to Allergan’s manufacturing facility in Ireland, its travels bring to mind a presidential Secret Service operation—minus literally all of the public attention.

A baby-aspirin-size amount of powdered toxin is enough to make the global supply of Botox for a year. That little bit is derived from a larger primary source, which is locked down somewhere in the continental U.S.—no one who isn’t on a carefully guarded list of government and company officials knows exactly where. Occasionally (the company won’t say how frequently), some of the toxin (the company won’t say how much) is shipped in secrecy to the lab in Irvine for research. Even less frequently, a bit of the toxin is transported by private jet, with guards aboard, to the plant in Ireland.

Source: Bloomberg

Image: DALL-E 3

Economic incentives and parental leave

The image is a stylized, split-screen illustration. On the left, a man in a dark blue and goldenrod outfit strides forward against a peach background, his lower half merging with newspaper clippings that swirl around him, suggesting a busy connection to current events. Abstract cloud-like shapes in blue and white speckles float in the background. He wears a hat and a watch, indicating his awareness of time and schedule. On the right, a woman leans gently towards a crib in a room bathed in blue. She wears a dark blue dress and a yellow sleep mask pushed above her forehead. The crib has a mobile adorned with stars and crescent moons, evoking a peaceful night sky, which is mirrored in the window's panes transitioning from white to blue.

This is an odd article which seems to be simply making the point that paternity leave is a good thing, but that fathers should consider taking it right after their baby is born. In other words, syncing paternity and maternity leaves.

The context is the US, which, as we know, is a capitalist free-for-all. So instead of having a bit of a go at men, for whom becoming a father for the first time is a huge shift (and one that is entirely psychological, as we don’t physically give birth), perhaps think about the underlying economic reasons?

The situation in other places, such as Scandinavia, isn’t mentioned in this article. Are men so different there? Or are the economic incentives for new families different?

For mothers, watching their partner unwind and enjoy leave often foreshadows the inequities yet to come, says Margaret Quinlan, professor of communication studies at University of North Carolina at Charlotte, who studies how parenthood is presented in the media. Fathers who take paternity leave are more strategic about theirs since it’s not tied to physical recovery. Many opt to take it at any point within the first year of their child’s birth, which allows them to consider how the leave affects their career. “Men can pick to take it when it’s convenient for them or when it will benefit them the most. Some even take the time off in a way that won’t impact their [annual] bonus,” she adds.

The inconsistency of parental leave for fathers can worsen inequality and breed further resentment regarding a mother’s mental load. Most of the fathers also know their time in charge is temporary, she says. “It’s very functional,” she adds.

Part of the problem is that paternity leave still feels like it’s optional, and there’s often pushback from older colleagues who never took leave, says Kelly O’Connell, 38, who works in aerospace operations in San Diego. Though he took leave with both of his children, with the first child he was worried about being away from the office. He took his month off in pieces, an initial two weeks and two more separate weeks later in the year. In the end, it was difficult to feel fully responsible. “It took me a week to even separate from work,” he says. “I was way more stressed making sure work stuff got done.”

But even if it seems more carefree, fathers deserve to have this time, which leads to more engaged parents in the long run. The better route may be to acknowledge the differences and bridge the gap between a stressful, hectic early maternity leave and what, in comparison, can seem like a less stressful paternity leave, says Petts, the professor.

Source: The Guardian

Human writing in the age of generative AI

A serene and imaginative moment of creativity captured in a room where traditional and futuristic elements blend. A person sits at a dark wood desk, deeply focused on writing with a classic quill pen. Above the desk, a modern, sleek lamp emits bright red light, while a holographic display projects swirling texts and images in blue and yellow. The room's walls are light gray, symbolizing a harmonious blend of the past and the future. This image highlights the human element in writing during the age of generative AI, focusing on the intimate and creative process.

I wholeheartedly agree with the sentiment behind this post by James Shelley, discussing writing in the age of generative AI. When I mention that I don’t particularly care about copyright, about people ‘ripping me off’, and about tools like ChatGPT being able to create lots of words, people tend to dismiss this as me speaking from a privileged position.

And yes, of course I am talking as a white middle-aged male, which I can’t help being. But on the other hand, the history of the world shows that ideas develop not because we carefully attribute them to one particular person, but because they can be built upon by anyone and everyone.

You could copy and paste this article into ChatGPT and say, “Please rewrite and paraphrase this blog post in such a way as to keep its main points and observations, but substantively reconfigure the text to make the original version undetectable.” And then, just like that, you have content for your own blog. So easy.

[…]

It is interesting to speculate about the future. It seems like people might eventually grow skeptical about investing their personal creativity in such a space, right? Will anyone bother writing on the internet when they know their words will be pilfered and junkified? What happens to the craft of writing itself when our de facto global platform for sharing text no longer reinforces or recognizes the role or rights of authorship?

[…]

Whether papyrus or the internet, humans doggedly write for influence, status, wealth, conviction, and pleasure. But the so-called sanctity of “authorship” is only a very recent idea. These “rights” of authorship are only true if they are enforced. They are a kind of fiction that only make sense in occasional times, places, and cultures. For the next chapter of the human experiment, I wonder if “authorship” will again recede into the background, as it often seems to do in times of disruptive changes in communication technology.

[…]

So, what’s the fun of writing on the internet anymore? Well, if your aim is to be respected as an author, there’s probably not much fun to be had here at all. Don’t write online for fame and glory. Oblivion, obscurity and exploitation are all but guaranteed. Write here because ideas matter, not authorship. Write here because the more robots, pirates, and single-minded trolls swallow up cyberspace, the more we need independent writing in order to think new thoughts in the future — even if your words are getting dished up and plated by an algorithm.

Source: James Shelley

The cause of our anger is not other people

A person stands at the tumultuous sea's edge under a stormy sky, symbolizing anger. They hold a compass for guidance and a bright red flame for energy. The sea and sky calm towards the horizon, transitioning to a serene landscape with a clear path forward. The palette includes Light Gray for the sky, Dark Gray for the sea, Bright Red for the flame, Yellow for the landscape, and Blue for the calm sea, embodying the transformation of anger into positive action through Stoic philosophy and nonviolent communication.

“Don’t use your anger for this, use it for that!” is the central message of an article in Vox. But if you reject the underlying premise of the article, that other people are the ‘cause’ of our anger, then the rest of it doesn’t make much sense.

You only have to meditate on the first few lines of The Enchiridion by Epictetus to learn that the cause of our emotions is our reactions to, and interpretations of, other people’s actions. Most Stoic philosophers teach the same.

That’s not to say putting into practice any of this is easy. Far from it. That’s why learning about FONT and Nonviolent Communication is important: it gives you an approach and a framework for dealing with situations without escalating them.

People are the root cause of anger. Everyone from romantic partners to leaders of foreign governments — and even ourselves — can make our blood boil. The way anger manifests varies, too. Anger is a punch, a scream, a red face, a silent brood, a river of tears. Anger is selfish (road rage) and selfless (protesting a war half a world away). This prickling, burning emotion — which can range from moderate irritation to complete rage — energizes us to come face-to-face with the wrongdoers, Martin says. When we’re angry, “our sympathetic nervous system activates our fight-or-flight response,” he says. “So our heart rate [is] increasing, our breathing increasing, and so on. That’s all a way to essentially give us the energy we need to fight back.”

There is an effective middle ground where anger can be leveraged to make positive change. When anger’s heat burns brightest is the time to make plans, says Jennifer Lerner, a professor of public policy, management, and decision science at the Harvard Kennedy School who also studies the effects of emotions on decision-making. But wait until the fire dulls to embers to take action.

If you do yearn to act impulsively, Lerner suggests using that energy to complete an item on your idealized wish list of things you hope to do in your spare time. (You know the one: signing up for a volunteer opportunity, picking up trash on your block, apologizing to a friend for forgetting their birthday.) “When you’re mad and you have a few minutes,” Lerner says, “just take something from your list and do it.”

Source: Vox

Every default macOS wallpaper in 6k

macOS Mojave background wallpaper (sand dune in desert)

Whichever operating system you’re using, a beautiful image as your background or screensaver is always a nice thing to have. This is a collection of every default macOS wallpaper – in 6K resolution!

Source: 512 Pixels

Building a Bonfire

Bonfire artwork

I’m delighted to see this article about Bonfire, a project I’ve contributed to on various occasions since it was forked from the codebase which underpinned MoodleNet.

I think Ivan and Mayel, the team behind Bonfire, have identified a really important niche in Open Science, although the technology they are building can be applied to pretty much anything.

Bonfire is inching ever closer towards a 1.0 release of its social offering, which is a landmark development for the project. But beneath the surface, there’s a bigger story going on: rather than simply being a social platform, it’s also a development framework.

As a project, Bonfire has been in development for a long time, taking on different shapes and forms throughout the years. It first emerged as CommonsPub, in an effort to bring ActivityPub federation to MoodleNet. After a long refactor and refocus, Bonfire seems to be hitting its stride.

[…] I want to take a moment to peel back the layers of Bonfire, because I think they really set it apart from other platforms. The vision for the project is incredibly unique: “we have all the pieces you need, all you have to do is assemble it.”

Source: We Distribute

AI-generated video is coming for your reality

It’s been almost impossible to miss the announcement from OpenAI, the creators of ChatGPT and DALL-E, about Sora, “an AI model that can create realistic and imaginative scenes from text instructions”. While this isn’t available to the general public yet (thankfully, given upcoming elections!) this is what’s on the horizon.

There’s a great overview and explainer from YouTuber MKBHD which I recommend. It’s important to remember that, while tech companies will point to things like C2PA as safeguards, the only real ways to protect your information landscape are: a) get your news from reputable sources, b) be skeptical about things that sound unlikely and go looking for other sources, and c) immerse yourself in new things like this so you start being able to recognise giveaway signs.

MKBHD does a good job of starting to point out some of the latter in the video above. Again, I suggest you watch it.

Brexit means Brexit in football, too

Leeds United v Rotherham United on 10 Feb: Leeds defender Connor Roberts makes a tackle Photograph: Simon Davies/ProSports/REX/Shutterstock

It’s taken The Guardian about five years, I reckon, to pick up on this phenomenon. My son and his mates were doing ‘Brexit tackles’ well before the start of the pandemic!

In one TikTok post, football content creator Kalan Lisbie, with tongue firmly in cheek, walks viewers through “how to do the Brexit tackle”. He informs us that “the first thing you need to do is pretend like you’re going to boot the ball away and not tackle. Second thing is that you want to rotate those hips and as soon as you rotate, you want to take absolutely everything … and then just clean him”. A commenter on another video notes that school football is now more like WWE.

[…]

There’s a healthy dose of irreverence in there too – you have to admit, there’s something very funny about one child barking “Brexit means Brexit!” to another in a muddy park. You get the sense they’re having fun at older generations’ expense. Ask any parent of a tweenager or older: no one is better able to comprehensively make fun of, or call attention to, adult flaws and hypocrisy.

By adopting “Brexit means Brexit” and transforming it into a symbol of almost dangerously rough play, you get the sense that children are holding up a mirror to the adult world. They’re using it as a joke, to be sure, but it’s a timely reminder that politicians’ words and political stances extend far beyond the immediate context, seeping into the fabric of our children’s lives.

Source: The Guardian

Writing, personal branding, and capitalism

Tiny supermarket trolley amongst stacks of books

Suw Charman-Anderson reflects on something that has definitely shifted over my lifetime: writing for money. These days, we live in the ‘creator economy’ which bears as much relation to reality as the ‘sharing economy’ does to the world of Airbnb, etc.

It’s related to the idea discussed in another article that’s been doing the rounds from Vox in which Rebecca Jones bemoans the need for ‘personal branding’ in every walk of life these days.

I’ve been running my own business since 1998, and I don’t want to have to bring that sensibility to my writing. I don’t like doing ‘promo’ and trying to ‘build a platform’ – I just want to share my writing with people whom I hope will enjoy it. I don’t want to get to a point where I’m spending more time doing marketing than writing. And yet, this is what is in store. 

It used to be that success brought fame. Now you need to be famous in order to even get a shot at success. Substack was supposed to be a way out of that double bind, but it isn’t. In her blog post, The creator economy can’t rely on Patreon, Joan Westenberg points out that Patreon and Substack are just flogging Kevin Kelly’s 1,000 True Fans theory from 2008.

[…]

The creative industries, like so many others, have individualised risk and privatised profits. So even though the creative industries sector contributed £109 billion to the UK economy in 2021 – that’s 5.6 percent of the entire economy – actual creatives go largely underpaid. We have become commodities. Until we are famous, we are entirely fungible. No one likes to think that about themselves, but this is what the industry has done to us. 

[…] I enjoy writing my newsletters, and I will continue to write them in the hope that others enjoy reading them. However, they will not figure in my financial plans, whether short-term or long-term. Any income they generate is gravy, it’s not the roast. 

[…]

Much of my focus is now on conserving energy so that I have enough to spend on writing and actual paying work. This is about developing a sustainable way to live which pays the bills and leaves me enough space to be creative. I don’t want to have to sacrifice my precious writing time at the altar of building a platform, even if that makes me less attractive to publishers. 

Source: Why Aren’t I Writing?

Generative AI means we need to use art school approaches to assessment

Drawing of a horse at different levels of fidelity, with lines indicating 1.1, 2.1, and 2.2 (which relate to classes of degree). The author is indicating that this approach is misguided.

Great post by Dave White, who works at University of the Arts, London. His point, which is well-made, is that in the world of Generative AI, we have to take an art school approach to… everything.

It’s interesting, because I can see elements of metacognition and systems thinking in all this. This kind of thing, along with the ways I’ve been using Generative AI in my own studies, make me cautiously optimistic.

Let’s say I set you the task of creating a picture of a horse, you can achieve this any way you want. The catch is that you have to explain why you have taken a certain approach, what you think the value of this approach is and the extent to which you have been successful relative to that value. (Importantly, you can also reflect on how you might have failed to do this).

You can use all kinds of tools to construct this story: theory, method, process, your identity, your cultural influences and experiences, a chosen canon of relevant work etc. This forms the narrative of your work and this can be assessed. 

[…]

[T]here are many similarities in the questions raised by Gen AI and Wikipedia because they are both technologies of cultural production which rapidly emerged in the public domain. This is a category of technology we consistently struggle with because it recategorises forms of labour and professional identities.

[…]

In the same way that copying and pasting from Wikipedia has very little value but can be very useful, so too with Gen AI. In practice this means much of what we characterised as creative work is being merged into broader notions of ‘production’, something Tobias Revell has discussed in terms of Design potentially ceasing to be a specialist field. 

[…]

Under these circumstances there is an imperative to teach beyond ‘good’, thereby equipping our graduates to swim to the surface of imitation and operate above the ever rising tide of skills-that-can-now-be-done-by-generalists.

Source: Dave White

Eye-opening heat map study

Four images showing 'heatmaps' of areas of interest comparing men and women

Perhaps sadly unsurprising to anyone who has ever talked about this with women, or who has lived as a child in an area that is less-than-safe.

As an adult male, being able to walk through the world without worrying about safety is a privilege. And there are definitely things we can do to help women feel more safe.

An eye-catching new BYU study shows just how different the experience of walking home at night is for women versus men.

The study, led by BYU public health professor Robbie Chaney, provides clear visual evidence of the constant environmental scanning women conduct as they walk in the dark, a safety consideration the study shows is unique to their experience.

Chaney and co-authors Alyssa Baer and Ida Tovar showed pictures of campus areas at Utah Valley University, Westminster, BYU and the University of Utah to participants and asked them to click on areas in the photo that caught their attention. Women focused significantly more on potential safety hazards — the periphery of the images — while men looked directly at focal points or their intended destination.

Source: BYU News
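The study’s method, as described above, amounts to collecting click coordinates on photographs and aggregating them into a heat map. As a minimal sketch of that aggregation step (the click coordinates below are invented for illustration, not data from the study):

```python
import numpy as np

# Hypothetical click data: (x, y) pixel coordinates where participants
# clicked on a 1000x750 photo of a campus area at night.
clicks = np.array([
    [120, 80], [950, 60], [40, 700],     # clicks on the periphery
    [500, 375], [510, 360], [495, 390],  # clicks on the focal point
])

# Bin the clicks into a coarse 10x10 grid; each cell counts how many
# clicks landed there, which is what a heat map visualises.
heatmap, x_edges, y_edges = np.histogram2d(
    clicks[:, 0], clicks[:, 1],
    bins=(10, 10), range=[[0, 1000], [0, 750]],
)

# Every click is counted exactly once across the grid.
assert heatmap.sum() == len(clicks)
```

Comparing the resulting grids for two groups of participants (e.g. counts concentrated in edge cells versus centre cells) is what makes the periphery-versus-focal-point difference visible.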

First Thought Shrapnel 'newsletter' via micro.blog!

A grand ship ready to set sail on the vast ocean of knowledge, detailed in bright red and blue, dominates this imaginative stage design. The vibrant blue sea with yellow highlights suggests a new beginning at sunrise, with figures boarding the ship in anticipation. The sky transitions from light gray to dark gray, providing a dramatic backdrop for the voyage, while mythical creatures symbolize the challenges and adventures ahead.

If you’re reading this, and have previously subscribed to Thought Shrapnel by email, then great! Everything’s working! If you subscribe via other means, you can safely ignore this post.

Apologies for the radio silence. This has been due to some technical issues with micro.blog and also quite an intense time around buying a house and getting my MSc assignment completed.

From now on, you’ll get an auto-generated email on a Sunday containing posts I’ve published on Thought Shrapnel during the week. This should be more sustainable for me, but I recognise that it lacks a bit of a personal touch. Apologies that I can’t control what time you receive it.
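For the curious, the mechanics of such a digest are simple: collect everything published in the seven days leading up to the Sunday send date. A minimal sketch (the post titles, dates, and `posts_for_digest` helper are all invented for illustration; micro.blog handles this automatically):

```python
from datetime import date, timedelta

# Hypothetical posts: (title, publication date).
posts = [
    ("The death of consensus reality", date(2024, 2, 12)),
    ("Eye-opening heat map study", date(2024, 2, 14)),
    ("Building a Bonfire", date(2024, 2, 16)),
    ("An older post", date(2024, 1, 3)),
]

def posts_for_digest(posts, sunday):
    """Return titles of posts published in the seven days ending on the given Sunday."""
    start = sunday - timedelta(days=7)
    return [title for title, published in posts if start < published <= sunday]

digest = posts_for_digest(posts, date(2024, 2, 18))  # 18 Feb 2024 is a Sunday
# digest contains the three posts from that week; the January post is excluded
```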

This is the only post I’m publishing on Thought Shrapnel this week, so it should be the only one that is featured in the digest email.

Image: DALL-E 3

The death of consensus reality

I mentioned the podcast Your Undivided Attention in a recent post. Last summer, I listened to an episode featuring Nita Farahany which I thought was excellent. I told everyone about it.

In this interview, Farahany is interviewed alongside Aza Raskin, one of the hosts of Your Undivided Attention. I’ve focused on Raskin’s answers, but you should read the whole thing, alongside listening to the podcast episode. Excellent stuff.

Nita Farahany, Aza Raskin, and Jane Metcalfe at the BrainMind Summit.

Aza Raskin: I think we can frame social media as “first contact with AI.” Where is AI in social media? Well, it’s a curation AI. It’s choosing which posts, which videos, which audio hits the retinas and eardrums of humanity. And notice, this very unsophisticated kind of AI misaligned with what was best for humanity. Just maximizing for engagement was enough to create this whole slew of terrible outcomes, a world none of us really wants to live in. We see the dysfunction of the U.S. government—at the same time that we have runaway technology we have a walk-away governance system. We have polarization and mental health crises. We don’t know really what’s true or not. We’re all in our own little subgroups. We’ve had the death of a consensus reality, and that was with curation AI—first generation, first contact AI.

We’re now moving into what we call “second contact with AI.” This is creation AI, generative AI. And then the question to ask yourself is, have we fixed the misalignment with the first one? No! So we should expect to see all of those problems just magnified by the power of the new technology 10 times, 100 times, 1,000 times more.

[…]

I think this is the year that I’ve really felt that confusion between “Is it to utopia or dystopia that we go?” And the lesson we can learn from social media is that we can predict the future if you understand the incentives. As Charlie Munger, Warren Buffett’s business partner, said, “If you show me the incentives, I’ll show you the outcome.” The way we say it is: “If you name the market race people are in, we can name the result.” The race is the result. And Congress is still sort of blind to that. And so we’re stuck in this question of do we get the promise? Do we get the peril? How can we just get the promise without the peril, without an acknowledgment of, well, what’s the incentive? And the incentive is: grow as fast as possible to increase your capabilities, to increase your power so you can make more money and get more compute and hire the best people. Wash, rinse, repeat without an understanding of what are the externalities. And humanity, no doubt, has created incredible technology. But we have yet to figure out a process by which we invent technology that then doesn’t have a worse externality, which we have to invent something new for. And we’re reaching the place where the externality that we create will break the fragile civilization we live in if we don’t get there beforehand.

Source: Social Media, AI, and the Battle for Your Brain | proto.life

Preparing for a year of electoral disinformation

I listened to an interesting episode of the Your Undivided Attention podcast a few days ago which approached questions around AI from the perspective of myth.

One of the points made was that we’ve lost the ability of councils of elders to stop things from happening when they’re likely to be dangerous for community cohesion. Now it’s “move fast and break things”. With AI, the ‘things’ could be democracy, civilisation, or perhaps even the planet.

The token gestures discussed in this article from companies like OpenAI are like spitting in the wind. I mean, it’s great that people can’t just ask ChatGPT to create something impersonating a politician, and that images will be watermarked as generated by AI. But even I wouldn’t find it that hard to generate reasonably-convincing deepfakes given available tools.

As I’ve found through work I’ve done on disinformation, people are looking for content which confirms their existing beliefs. This means that you don’t have to create things that are particularly sophisticated for disinformation to go viral. And then by the time it’s debunked, more stuff has come out. It’s a game of whack-a-mole, except (to extend the metaphor) the moles have the potential to explode.

OpenAI logo

Yesterday TikTok presented me with what appeared to be a deepfake of Timothee Chalamet sitting in Leonardo Dicaprio’s lap and yes, I did immediately think “if this stupid video is that good imagine how bad the election misinformation will be.” OpenAI has, by necessity, been thinking about the same thing and today updated its policies to begin to address the issue.

In addition to being firmer in its policies on election misinformation OpenAI also plans to incorporate the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials into images generated by Dall-E “early this year”. Currently Microsoft, Amazon, Adobe, and Getty are also working with C2PA to combat misinformation through AI image generation.

…Given that AI is itself a rapidly changing tool that regularly surprises us with wonderful poetry and outright lies it’s not clear how well this will work to combat misinformation in the election season. For now your best bet will continue to be embracing media literacy. That means questioning every piece of news or image that seems too good to be true and at least doing a quick Google search if your ChatGPT one turns up something utterly wild.

Source: Here’s OpenAI’s big plan to combat election misinformation | The Verge

Doing something about the UK schooling class divide

In the UK, prices of family-sized homes are closely linked to the Ofsted rating of local schools. This leads to segregation based on ability to pay. As people who are in favour of private schools have told me, this means that any arguments I make against paying for education are a bit hypocritical.

My kids have had a much better schooling and in a safer area than I grew up in. Every parent wants this for their children. But by segregating schooling based on income, we turn it into a game that middle class parents play to win.

So what’s being proposed in Brighton is huge: essentially de-coupling house prices from school admissions. I hope that it takes off, and it becomes the norm. It takes a while to see and feel the class system in England in particular. But once you do, you can’t avoid the systemic injustice of it all.

Person with Waitrose bag on their head saying 'I don't see the problem. I always did well at school...'

As any estate agent knows, a school judged outstanding by Ofsted will push up neighbouring property prices. This is a cruel system that drives families who can afford it to uproot themselves, makes many of those who cannot feel inadequate, and produces and intensifies social segregation.

Few would dispute this account. Not the government, which has published papers on the link between house prices and schools, nor academics or analysts: just last week the Sutton Trust published findings showing that 155 comprehensives, supposedly open to all, are more socially selective than a typical grammar. In Scotland, home addresses are assigned one secondary school so that, as the Institute for Fiscal Studies points out, social segregation there is even more marked.

Rarely does any of this feature in the discussion around raising school standards. Ministers and policy experts talk about Sats, school curricula, inspections – rather than bringing down the invisible barriers that go up for children as early as five. Which is why Brighton and Hove is worth watching. On Monday, its Labour-led council will vote to change secondary school admissions. Councillors propose to make local authority secondaries give priority to children on free school meals over pupils from the catchment area. Observers believe that Brighton and Hove will be the first council ever to do this. The move is an attempt to reduce inequality within a highly unequal city, to mix up school populations, and to give pupils access to sought‑after schools. For a city that prides itself on being progressive and inclusive, this is a big step towards living those values.

Source: The Guardian view on school reform: southern discomfort about the class divide | The Guardian

Image: CC BY-ND Visual Thinkery

Shared persuasion tactics

I feel like this fits well with some stuff WAO has been revisiting this week around challenger brands and crafting messages for specific audiences.

Composite image of politicians and company logos

The same forces that are driving the rise of populism in politics are also used by startups to grow their business.

Here are the political strategies that businesses use to grow:

  1. The power of the outsider narrative
  2. Single-issue voters
  3. Grassroots mobilisation
  4. Narrative control and messaging
  5. Building alliances and partnerships
  6. Segmentation and targeting

[…]

The key takeaway is that inspiration can be drawn from the most unexpected places. From modern politics and entrepreneurship, there’s always something new to learn, adapt, and apply to your own endeavours.

Source: The shared persuasion tactics of politics and startups

An 'anti-social network' you post to via email subject lines

On the one hand, this is awesome. On the other, what would I use it for?

Mine’s here. Don’t expect much! I think if I wanted something like this I’d probably use telegra.ph instead. Although it does give off a Posterous vibe from ~15 years ago. (I see Posthaven still exists!)

Screenshot of Daft Social

Daft Social lets you post and share notes, links or images by email subject only. From any email account.

Source: Daft Social

Welcome to the new home of Thought Shrapnel! Excuse the mess while we unpack boxes, etc.

What is degrowth communism?

This interview with Kohei Saito in EL PAÍS talks about the importance of having a positive view of the future, with “a society that adapts to the limits of nature and offers universal access to education, health, transportation, internet”.

Sounds good to me.

The image created depicts a peaceful, sustainable community thriving in harmony with nature, focusing on the concept of degrowth. The scene includes community gardens, renewable energy sources like wind turbines and solar panels, and people of diverse backgrounds engaging in educational and artistic activities. The color palette of light gray, dark gray, bright red, yellow, and blue symbolizes a vibrant, sustainable way of living that emphasizes environmental harmony and a shift away from industrial excess.
We are in a chronic state of emergency. The pandemic was not the last crisis, but rather the beginning of more problems. We should not forget that moment [during lockdown] when, consciously, we halted capitalism. It seemed impossible. But it happened. For a short time. A good moment to establish some distance: people came back more anti-capitalist and inclined towards degrowth. Let’s remember that.

[…]

I talk about a degrowth communism: a society that adapts to the limits of nature and offers universal access to education, health, transportation, internet… Due to a variety of crises, access to these services — the common good — has been undermined for many. But without positive visions of the future, there will be more and more discontent. What we need is to build a broad movement: environmentalist, working-class, feminist, Indigenist… To propose an inclusive and emancipatory future.

[…]

The Anthropocene signifies that humans have become a geological force, with the ability to modify the planet. But not everyone is equally responsible for this situation. It’s primarily the people of the Global North; particularly, the super-rich who think they can do it all with their money, even flee the Earth. That idea of conquest originates with European colonialism, linking imperialism, capitalism and progress. We should also restrict space shuttles, like SpaceX. Spending so much money, effort and time on going to Mars seems stupid to me; we should invest that energy in saving our planet. As a philosopher, I’m an optimist. Our perception, our values, can change in two or five years. Opportunities for change are everywhere. I want to explore what they are.

Source: Kohei Saito, philosopher: ‘Spending so much money, effort and time on going to Mars is stupid’ | Climate | EL PAÍS English

Spy windows?

No technology is neutral, and vendors are only ever going to tout the positive qualities. Take this example: it’s a way to create a camera out of any window. Huge benefits, as the article says, but also some rather large (and dystopian) downsides.

The image depicts a futuristic glass door on the front of a modern corporate building, reflecting a cityscape with skyscrapers under a sky with clouds. The glass features a holographic facial recognition system with a green circle and lock icon surrounding the reflection of a woman's face with short hair and glasses, indicating access has been granted.

Zeiss is bringing its remarkable Holocam technology to CES 2024, which can turn any glass screen into a camera. This means that everything from the window in your car to the screen on your laptop to the glass on your front door can now possess an invisible image sensor.

[…]

The Holocam technology “uses holographic in-coupling, light guiding and de-coupling elements to redirect the incoming light of a transparent medium to a hidden image sensor.”

[…]

Using an entire pane of glass as a camera lens also opens some fascinating optical possibilities. Some of Zeiss' bullet points include “large aperture invisible camera” and “individual adjustment of orientation and size of the field of views.” Which makes me wonder, what is the maximum aperture and focal range of a camera like this?

Of course, there’s a darker potential for such technology. Given the current fear around hidden cameras in Airbnbs, the idea of every single window (or even shower door) in a rental property being able to spy on you is a little disconcerting.

Source: This holographic camera turns any window into an invisible camera | Digital Camera World

We become what we behold

An insightful and nuanced post from Stephen Downes, who reflects on various experiences, from changing RSS reader through to the way he takes photographs. What he calls ‘AI drift’ is our tendency to replace manual processes with automated ones.

What I appreciate is that Downes doesn’t say this is A Bad Thing, but that we should notice and reflect on these things. For example, I’ve found it really useful to use AI with my MSc studies and to understand (and accelerate) some of the client work I’ve been involved with.

 This image depicts a person in a dimly lit room, surrounded by stacks of books and papers, focusing on a bright computer screen. The room fades from bright red near the screen to dark gray in the corners, with yellow sticky notes scattered around. The light gray walls are adorned with fading pictures, representing the neglected interests due to 'AI drift'.
What's important is to notice what's happening. When I use AI to select the posts I read in my RSS reader, I'm finding more from the categories I've defined, but I'm missing the new stuff from categories that might not exist yet - the oft-referenced filter bubble. Also, I'm missing the ebb and flow of the undercurrent, of the comings and goings, of the stuff that seems off topic and doesn't matter - and yet, to someone who dwells in the debris like me, it does.

This is what I’m calling ‘AI drift’ in humans. It’s this phenomenon whereby you sort of ‘drift’ into new patterns and habits when you’re in an AI environment. It’s not the filter bubble; that’s just one part of it. It’s the influence it has over all our behaviour. One of those patterns, obviously, is that you start relying on the AI more to do things. But also, you stop doing some of the things you used to do - not because the AI is handling it for you, because as in this case it might not be helping at all, but because you just start doing other things.

[…]

AI drift isn’t inherently good, and it isn’t inherently bad. It just is. It’s like that quote often attributed to McLuhan: “We become what we behold. We shape our tools and then our tools shape us.” Recognizing AI drift is simply recognizing how we’re changing as we use new tools. We then decide whether we like that change or not. In my own case, it comes with some mixed feelings. But that’s OK. I wouldn’t expect anything else.

Source: AI Drift | Half an Hour

Your future is statistically more likely to be better than your past

Another fantastic article by Arthur C. Brooks for The Atlantic which draws on research about how your future is likely to be happier than your past. That’s because of various psychological effects that come into play as you age.

Good news! I’m particularly looking forward to my anxiety tamping down and not being as triggered by negative situations.

A surreal image depicting an abstract figure, made of clock hands and gears, standing at the edge of a cliff. The sky transitions from light gray at the horizon to deep blue at the top. The ground is a mosaic of calendar pages, some fluttering in the wind.
Let’s start with how you will feel when you are old. By this, I don’t mean whether your back will hurt more (it almost certainly will), but rather the balance between your positive and negative moods as you age. The answer is probably better than you feel now.

[…]

A 2013 review of research reveals that older people develop at least three distinct emotional skills: They react less to negative situations, they are better at ignoring irrelevant negative stimuli than they were when younger, and they remember more positive than negative information. This is almost like a superpower many older people have, that they know negative emotions won’t last so they get a head start on feeling good by consciously disregarding bad feelings as they arise.

[…]

If you follow the typical development, you can expect to be nicer and kinder, and less depressed and anxious, when you are old.

[…]

The good news about aging is that if we simply leave things to the passage of time, life will probably get better for us. But we can do more than just wait around to get old. We can lean into the natural improvements and manage any trends we don’t like.

Source: How to Be Happy Growing Older | The Atlantic

Image: DALL-E 3

Logical fallacies, cognitive biases, and more

I always enjoy posts like this because I invariably learn something new. There are some gems in here: some I hadn’t come across before, and some I had.

There are plenty of logical fallacies and cognitive biases amongst the ideas, which reminds me of this from Buster Benson. I had a large poster of the linked image on the wall of my home office, and it was always something people commented on.

The image illustrates a fragile glass world on the edge of a cliff, with a lone figure in red standing at the brink, against a backdrop of light and dark gray skies.
Woozle Effect: “A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth.” - Daniel Kahneman.

[…]

Fact-Check Scarcity Principle: This article is called 100 Little Ideas but there are fewer than 100 ideas. 99% of readers won’t notice because they’re not checking, and most of those who notice won’t say anything. Don’t believe everything you read.

[…]

Emotional Competence: The ability to recognize others’ emotions and respond to them productively. Harder and rarer than it sounds.

Source: 100 Little Ideas | Collab Fund

Image: DALL-E 3

Would you survive in medieval Europe?

Realistically, I’m never going to watch an hour-long YouTube video which is mainly a talking head. I mean, I’m into history, but I’m not that into it.

Thankfully, Open Culture has summarised some of the most important points. If you’re the kind of person who watches a lot of YouTube, then maybe you want to add this to your queue?

An intricately detailed illustration in the style of a medieval manuscript, depicting a lively street scene with Gothic architectural elements. The image is populated with figures dressed as merchants and pilgrims, in a color palette of light gray, dark gray, bright red, yellow, and blue, capturing the vibrancy of medieval Europe. The borders are adorned with floral motifs, enhancing the manuscript's authentic feel.

In the new video above, history YouTuber Premodernist provides an hour’s worth of advice to the modern person preparing to travel back in time to medieval Europe — beginning with the declaration that “you will very likely get sick.”

The gastrointestinal distress posed by the “native biome” of medieval European food and drink is one thing; the threat of robbery or worse by its roving packs of outlaws is quite another. “Crime is rampant” where you’re going, so “carry a dagger” and “learn how to use it.” In societies of the Middle Ages, people could only protect themselves by being “enmeshed in social webs with each other. No one was an individual.” And so, as a traveler, you must — to put it in Dungeons-and-Dragons terms — belong to some legible class. Though you’ll have no choice but to present yourself as having come from a distant land, you can feel free to pick one of two guises that will suit your obvious foreignness: “you’re either a merchant or a pilgrim.”

Source: Advice for Time Traveling to Medieval Europe: How to Stay Healthy & Safe, and Avoid Charges of Witchcraft | Open Culture

Image: DALL-E 3

The rich are scared we're going to eat them

I’m reading Roots at the moment, the novel by Alex Haley about an African man captured and sold into slavery. I’m at the point of the story where his daughter’s ‘massa’ gets spooked about a slave uprising.

It’s difficult not to draw parallels when reading about an apparent trend towards billionaires building luxury ‘bunkers’ with supplies and blast-proof doors. They would do well to worry, given the amount of inequality in the world.

A multi-level, circular billionaire's retreat that resembles a stage set, with a central living space featuring a couch with yellow and blue pillows. Surrounding the living area are various high-tech stations and secure vaults, along with a self-contained ecosystem on the upper level. The space is adorned in light and dark grays, with red and blue accents, suggesting a luxurious yet fortified sanctuary.
One prevalent speculation that has circulated suggests that these billionaires might possess knowledge beyond the scope of the average person. The idea is that their vast resources are being channeled into constructing secure retreats as a form of preparation for potential global upheavals or crises. This speculation plays into the notion that these elite individuals may be privy to information that the general public is not, prompting them to take unprecedented measures to safeguard their well-being. Moreover, some fear that the escalating global tensions and geopolitical uncertainties may be driving these billionaires to prepare for worst-case scenarios, including the prospect of war.

Source: Zuckerberg's Bunker Plans Fuel Speculation on Billionaires Building Bunkers | Decode Today

Image: DALL-E 3

Remember distinct music scenes and culinary traditions? Yeah, they're coming back.

Anything that Anil Dash writes is worth reading and this, his first article for Rolling Stone, is no different. I haven’t quoted it here, but I love the first paragraph. What goes around, comes around, eh?

This is a vibrant and highly detailed image depicting a fantastical scene reminiscent of a stage set for an imaginary play. The artwork is rich with various elements and layers, featuring multiple colorful structures that resemble different themed areas or sets. On the left, there's a golden-yellow structure with green accents, platforms, and staircases that evoke a bustling market or social hub, with tiny figures that appear to be people engaging in various activities. Centered in the image is a towering cityscape with blue and black skyscrapers rising among white, fluffy clouds against a clear sky. To the right, the scene turns darker with red and black twisted trees and buildings that have a more ominous vibe, including some structures that are on fire and surrounded by dark birds. The entire image is a blend of whimsy and chaos, with numerous birds in flight throughout, some carrying symbols like hearts and crosses. There are also splashes of paint and abstract elements scattered across the image, contributing to the surreal, dreamlike atmosphere. The overall color scheme includes bright red, yellow, blue, and varying shades of dark gray, all set against a light blue background that suggests a waterside setting at the bottom edge of the image.
[T]his new year offers many echoes of a moment we haven’t seen in a quarter-century. Some of the most dominant companies on the internet are at risk of losing their relevance, and the rest of us are rethinking our daily habits in ways that will shift the digital landscape as we know it. Though the specifics are hard to predict, we can look to historical precedents to understand the changes that are about to come, and even to predict how regular internet users — not just the world’s tech tycoons — may be the ones who decide how it goes.

[…]

We are about to see the biggest reshuffling of power on the internet in 25 years, in a way that most of the internet’s current users have never seen before. And while some of the drivers of this change have been hyped up, or even over-hyped, a few of the most important changes haven’t gotten any discussion at all.

[…]

Consider the dramatic power shift happening right now in social media. Twitter’s slide into irrelevance and extremism as it decays into X has hastened the explosive growth of a whole host of newer social networks. There’s the nerdy vibes of the noncommercial Mastodon communities (each one with its own set of Dungeons and Dragons rules to play by), the raucous hedonism of Bluesky (like your old Tumblr timeline at its most scandalous), and the at-least-it’s-not-LinkedIn noisiness of Threads, brought to you by Instagram, meaning Facebook, meaning Meta. There are lots more, of course, and probably another new one popping up tomorrow, but that’s what’s great about it. A generation ago, we saw early social networks like LiveJournal and Xanga and Black Planet and Friendster and many others come and go, each finding their own specific audience and focus. For those who remember a time in the last century when things were less homogenous, and different geographic regions might have their own distinct music scenes or culinary traditions, it’s easy to understand the appeal of an online equivalent to different, connected neighborhoods that each have their own vibe. While this new, more diffuse set of social networks sometimes requires a little more tinkering to get started, they epitomize the complexity and multiplicity of the weirder and more open web that’s flourishing today.

[…]

I’m not a pollyanna about the fact that there are still going to be lots of horrible things on the internet, and that too many of the tycoons who rule the tech industry are trying to make the bad things worse. (After all, look what the last wild era online led to.) There’s not going to be some new killer app that displaces Google or Facebook or Twitter with a love-powered alternative. But that’s because there shouldn’t be. There should be lots of different, human-scale alternative experiences on the internet that offer up home-cooked, locally-grown, ethically-sourced, code-to-table alternatives to the factory-farmed junk food of the internet. And they should be weird.

Source: The Internet Is About to Get Weird Again | Rolling Stone

Image: DALL-E 3

Giving up is an attempt to make a different future

This is some incredible writing from psychotherapist Adam Phillips. It’s an edited extract from his forthcoming book On Giving Up and is based on the subtle difference between ‘giving up’ something and… just giving up.

It’s a really important read, at least for me, and particularly poignant at the start of the year. The fact that he talks about Montaigne (one of my favourite authors) and Marion Milner’s demarcation of different forms of attention makes this a highly recommended read. It’s long, but worth it.

I’ve almost picked at random a section to quote here because it’s all fantastic.

A wide, imaginative illustration capturing the essence of 'Giving up is an attempt to make a different future.' The scene depicts a seamless blend of characters in various states of surrender and aspiration, symbolizing the complex interplay between relinquishing and pursuing. The continuous landscape merges elements of hope and despair, reflecting the subtlety of the concept. Subtle references to Montaigne and Marion Milner, like books and thoughtful symbols, are integrated throughout.
There are, to put it as simply as possible, what turn out to be good and bad sacrifices (and sacrifice creates the illusion – or reassures us – that we can choose our losses). There is the giving up that we can admire and aspire to, and the giving up that profoundly unsettles us. What, for example, does real hope or real despair require us to relinquish? What exactly do we imagine we are doing when we give something up? There is an essential and far-reaching ambiguity to this simple idea. We give things up when we believe we can change; we give up when we believe we can’t.

All the new thinking, like all the old thinking, is about sacrifice, about what we should give up to get the lives we should want. For our health, for our planet, for our emotional and moral wellbeing – and, indeed, for the profits of the rich – we are asked to give up a great deal now. But alongside this orgy of improving self-sacrifices – or perhaps underlying it – there is a despair and terror of just wanting to give up. A need to keep at bay the sense that life may not be worth the struggle, the struggle that religions and therapies and education, and entertainment, and commodities, and the arts in general are there to help us with. For more and more people now it seems that it is their hatred and their prejudice and their scapegoating that actually keeps them going. As though we are tempted more than ever by what Nietzsche once called “a will to nothingness, a counter-will, an aversion to life, a rebellion against the most fundamental presuppositions of life”.

The abiding disillusionment with politics and personal relationships, the demand for and the fear of free speech, the dread and the longing for consensus and the coerced consensus of the various fundamentalisms has created a cultural climate of intimidation and righteous indignation. It is as if our ambivalence about our aliveness – about the feeling alive that, however fleeting, sustains us – has become an unbearable tension and needs to be resolved. So even though we cannot, as yet, imagine or describe our lives without the idea of sacrifice, and its secret sharer, compromise, the whole notion of what we want and can get through sacrifice is less clear; both what we think we want and what we are as yet unaware of wanting. The formulating of personal and political ideals has become either too assured or too precarious. And the whole notion of sacrifice depends upon our knowing what we want.

Source: What we talk about when we talk about giving up | The Guardian

Image: DALL-E 3

We already have solutions for a lot of problems, we just don’t use them

A belated Happy New Year, and what better way to start off 2024 than with this reminder that quite a lot of what’s holding us back in the world is political will and societal coordination.

Abstract and imaginative illustration showcasing a dramatic contrast between technological advancement and societal challenges. On one side, a surreal, technologically advanced cityscape with whimsical structures and futuristic elements is depicted. On the other side, diverse individuals are portrayed with exaggerated expressions and unique, fantastical clothing, set against a backdrop of abstract forms and symbols.
I remember growing up with that same old adage of how you could be the next scientist to invent a cure for cancer, or a solution to climate change, or whatever. What they don’t tell you is that we already have solutions for a lot of problems, we just don’t use them. Sometimes this is because the solution is too expensive, but usually it’s because competing interests create a tragedy of the commons. Most problems in the modern age aren’t complicated engineering problems, they’re the same problem: coordination failure.

[…]

We actually have a cure for blood cancer now, by the way. Like, we’ve done it. It’s likely that a similar form of immunotherapy will generalize to most forms of cancer. Unfortunately, the only approved gene therapy we have is for sickle-cell disease and costs $2 million per patient, so most people in America simply assume they will never be able to afford any of these treatments, even if they were dying of cancer, because insurance will never cover it. This is actually really bad, because if nobody can afford the treatment, then biotech companies won’t bother investing into it, because it’s not profitable! We have built a society that can’t properly incentivize CURING CANCER. This is despite the fact that socialized healthcare is a proven effective strategy (as long as the government doesn’t sabotage it). We could fix this, we just don’t.

[…]

It’s January 1st of the new year, and with all these people wishing each other a “better year”, I am here to remind you that it will only get worse unless we do something. Society getting worse is not something you are hallucinating. It cannot be fixed by you biking to work, or winning the lottery. We are running on the fumes of our wild technological progress of the past 100 years, and our inability to build social systems that can cooperate will destroy civilization as we know it, unless we do something about it.

Source: We Could Fix Everything, We Just Don’t | Erik McClure

Image: DALL-E 3

Best of Thought Shrapnel 2023

Hello hello. I hope you're well 🙂

According to my stats, the following posts, all published in the last 12 months, were the most accessed on Thought Shrapnel.

What were your favourites? Are they on this list? The archives can be found here.


1. The burnout curve

Published: 11th September

The Burnout-Growth Curve

I stumbled across this on LinkedIn. There doesn’t seem to be an authoritative source yet other than the author’s (Nick Petrie) social media posts, which is a shame. So I’m quoting most of it here so I can find and refer to it in future.

Read the post


2. AI writing detectors don't work

Published: 9th September

Person covering their eyes with one hand and making the 'stop' sign with the other.

This article discusses OpenAI’s recent admission that AI writing detectors are ineffective, often yielding false positives and failing to reliably distinguish between human and AI-generated content. They advise against the use of automated AI detection tools, something that educational institutions will inevitably ignore.

Read the post


3. Oh great, another skills passport

Published: 25th September

People working

Not only is this the wrong metaphor, but it diverts money and attention from fixing some of the real issues in the system.

Read the post


4. Good news on Covid treatments

Published: 16th September

Person in biosuit attacking a Covid spike protein

Well, this is promising. Researchers have identified a critical weakness in COVID-19: its reliance on specific human proteins for replication. The virus has an “N protein” which needs human cells to properly package its genome and propagate. Apparently, blocking this interaction could prevent the virus from infecting human cells.

Read the post


5. The punishment for being authentic is becoming someone else's content

Published: 9th September

Crack in road with plaster/band-aid stuck over it

What I think is interesting is how online and offline used to be seen as completely separate. Then we realised the impact that offline life had on online life, and now we’re seeing the reverse: Instagram, TikTok, etc. having a huge impact on the spaces in which we exist offline.

Read post


6. Using AI to aid with banning books is another level of dystopia

Published: 17th August

However, what I’m concerned about is AI decision-making. In this case, a crazy law is being implemented by people who haven’t read the books in question, who then outsource the decision to a language model that doesn’t really understand what’s being asked of it.

Read post


7. A philosophy of travel

Published: 30th August

Traveller in a bubble in a landscape

This article critically examines the concept of travel, questioning its oft-claimed benefits of ‘enlightenment’ and ‘personal growth’. It cites various thinkers who have critiqued travel (including one of my favourites, Fernando Pessoa) suggesting that it can actually distance us from genuine human connection and meaningful experiences.

Read post


8. We need to talk about AI porn

Published: 25th August

Screenshot with blurred image and red button saying 'Upgrade to Basic'. Explanation underneath explains NSFW is only available to premium members.

As this article details, a lot of porn has already been generated. Again, prudishness aside relating to people’s kinks, there are all kinds of philosophical, political, and legal issues at play here. Child pornography is abhorrent; how is our legal system going to deal with AI-generated versions? What about the inevitable ‘shaming’ of people via AI-generated sex acts?

Read post


9. Update your profile photo at least every three years

Published: 11th January

Person looking at camera

I think this is good advice. I try to update mine regularly, although I did realise that last year I chose a photo that was five years old! These days I prefer ‘natural’ photos taken in family situations, which I then edit, rather than headshots.

Read post


10. Britain is screwed

Published: 8th February

Chart showing UK as bottom of the table in terms of benefits in unemployment as a share of previous income.

I followed a link from this article to some OECD data which, as shown in the chart, reveals that the UK has even lower welfare payments than the US. The economy of our country is absolutely broken, mainly due to Brexit, but also due to the chasm between everyday people and the elites.

Read post


Have a happy new year when it arrives!

Doug

PS I've given up on Substack and, because I'm tired of moving platforms, I think I'll just send out emails via this site for now. More news on that soon.

Back next year!

Sign saying 'See you later'

That's it for Thought Shrapnel for 2023. Make sure you're subscribed for when we're back next year! (RSS / newsletter)

Image: Unsplash

Avoiding the 'Dark Triads'

Arthur C. Brooks, whose writing I always enjoy, writes on sociopaths, narcissists, and ‘Dark Triad’ personalities. These Dark Triads are characterised by narcissism, Machiavellianism, and psychopathy. They’re manipulative and harmful, and make up about 7% of the population, although interestingly they account for a significantly larger share of the male prison population.

Brooks talks about how to spot and avoid them in the workplace and on social media, and how to gravitate towards ‘Light Triad’ personalities instead. These embody positive traits like faith in humanity and humanism, and represent a more uplifting aspect of human nature. Thankfully, Light Triads are more common in the general population.

DALL-E 3 created image showing light and dark
As far as the workplace is concerned, scholars have found that narcissists tend toward artistic, creative, and social careers; researchers also saw that Machiavellians, in particular, avoid careers that involve caring for others. Look out for Dark Triads, in other words, in professions that involve human contact, performance, and applause, but little concerned attention to other people. An obvious example might be politics; another would be show business. But the type can manifest in many careers and professions. At work, such individuals tend to exaggerate their own worth, show a distrustful attitude toward colleagues, act impulsively and irresponsibly, break rules, and lie.

[…]

The traits to look for are self-importance, a sense of entitlement, vanity, a victim mentality, a tendency to bend the truth or even openly lie, manipulativeness, grandiosity, a lack of remorse, and an absence of empathy. Probe for these characteristics particularly when on first dates and in job interviews. You might even want to take that test imaginatively on behalf of someone you suspect may have Triad traits and see what result you get.

Source: The Sociopaths Among Us—And How to Avoid Them | The Atlantic

Image: DALL-E 3

The 9-5 shift is a relatively recent invention

As a Xennial, I have all of the guilt for not working hard enough, along with a desire to live a life more fulfilling and holistic than my parents’. Younger generations, including Gen Z and of course my kids, think that working all of the hours is a bit crazy.

This article is about a viral TikTok video of a Gen Z ‘girl’ (although surely ‘young woman’?) crying because the 9-5 grind is “crazy… How do you have friends? How do you have time for dating? I don’t have time for anything, I’m so stressed out.”

It’s easy, as with so many things, for older generations to inflict on generations coming after them the crap that they themselves have had to deal with. But it doesn’t have to be this way. As the article says, the 9-5 job is a relatively recent invention and I, for one, don’t follow that convention.

Someone sitting back in a chair with a BBQ in the middle of a cubicle office
When the video – which has been viewed nearly 50 million times across TikTok and Twitter – first started to spread, the comments weren’t sympathetic. She was trashed by neoliberal hustle and grind stans – most of whom seemed old enough to be her parents. “Gen Z girl finds out what a real job is like,” one X (formerly Twitter) user sneered. “Grown-ups don’t prioritise friends, or dating. Grown-ups prioritise being able to provide,” another commenter wrote, neglecting the fact that if you’re young, single, and have no friends, there isn’t really anyone to “provide” for.

But then the tide began to turn. People started to point out that “Gen Z girl” was right, actually. Work sucks! No one has any time for anything! Within days, she had become the figurehead for an increasingly common sentiment: We don’t want our lives to revolve around work anymore.

[…]

It doesn’t feel like an exaggeration to say young people have been gaslit by older generations when it comes to work. As wages stagnate and costs rise, the generation that got free university education and cheap housing have somehow convinced young people that if we’re sad and stressed then it’s simply a problem with our work ethic. We’re too sensitive, entitled, or demanding to hold down a “real job”, the story goes, when really most of us just want a decent night’s sleep and less debt.

[…]

It’s always worth reminding ourselves that the 9-5 shift is itself a relatively recent invention, not some sort of eternal truth, and hopefully soon we’ll see it as a relic from a bygone age. “It was set up to support our patriarchal society – men went to work and women stayed at home to cook and look after the family,” says Emma Last, founder of the workplace wellbeing programme Progressive Minds. “Things have obviously changed a lot since then, and we’re trying to find the balance between cooking meals, looking after ourselves, spending time with family and friends, and having relationships. Isn’t it a good thing that Gen Z are questioning it all?”

Source: Nobody Wants Their Job to Rule Their Lives Anymore | VICE

Towards an epistemology of the humanities

Lorraine Daston highlights the lack of a systematic approach to knowledge (epistemology) in the humanities, unlike in the sciences. This gap affects the perception and value of the humanities in education and society. Daston suggests that the emerging field of the history of the humanities could open up this area, stressing the importance of developing an epistemology of the humanities to validate its methods and significance.

Sadly, it’s this perceived lack of ‘rigour’ which means that humanities departments, whose alumni are needed more than ever in the world of technology, tend to be cut and defunded compared to more ‘scientific’ faculty areas.

DALL-E 3 image: An abstract representation of the concept of epistemology in the humanities and sciences.
In the past decade a new field called the history of the humanities has been assembled out of pieces previously belonging to the history of learning, disciplinary histories, the history of science, and intellectual history. The new specialty tends to be more widely cultivated in languages that had never narrowed their vernacular cognates of the Latin scientia to refer only to the natural sciences, such as those of Dutch and German. So far, its practitioners have not been particularly interested in questions of epistemology. But just as the history of science has long served as a stimulus and sparring partner to the philosophy of science, perhaps the history of the humanities will eventually engender a philosophical counterpart. Even if it did, though, the question would remain: What would be the point? Just as many scientists query the need for an epistemology of science, many humanists may find an epistemology of the humanities superfluous: we know how to do what we do, and we’ll just get on with it, thank you very much.

I’m not so sure we really know how we know what we know. And even if we did, a great number of intelligent, well-educated people, our ideal readers and potential students, even our colleagues in other departments, wonder why what we teach and write counts as knowledge. The first step in justifying our ways of knowing to these doubters would be to justify them to ourselves.

Source: How We Know What We Know | In the Moment

Image: DALL-E 3

More like Grammarly than HAL 9000

I’m currently studying towards an MSc in Systems Thinking and earlier this week created a GPT to help me. I fed in all of the course materials, being careful to check the box saying that OpenAI couldn’t use it to improve their models.

It’s not perfect, but it’s really useful. Given the extra context, ChatGPT can not only help me understand key concepts on the course, but also relate them more closely to the course’s overall context.

A tool like this would have been really useful on the MA in Modern History I studied for 20 years ago. Back then, I was in the archives with primary sources, such as minutes from the meetings of Victorians discussing educational policy, and reading reports. Being able to have an LLM do everything from explaining things in more detail, to guessing illegible words, to (as below) creating charts from data would have been super useful.
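That chart-from-data step is simple once the transcription is done. As a sketch, assume the LLM has already transcribed a scanned table into rows; the figures below are invented for illustration, and the chart is rendered as plain text to keep the example dependency-free:

```python
# Rows as an LLM might return them after transcribing a scanned table.
# The years and attendance figures are hypothetical, purely illustrative.
rows = [("1870", 1200), ("1880", 2350), ("1890", 4100)]

def text_bar_chart(rows, width=40):
    """Render (label, value) pairs as a horizontal text bar chart."""
    peak = max(value for _, value in rows)
    lines = []
    for label, value in rows:
        bar = "#" * round(width * value / peak)  # scale bars to the largest value
        lines.append(f"{label} | {bar} {value}")
    return "\n".join(lines)

print(text_bar_chart(rows))
```

In practice you’d hand the same rows to a plotting library, but the point stands: the hard archival work is the transcription, and the LLM can do much of that.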

AI converting scanned page with numbers into a bar chart
The key thing is to avoid following the path of least resistance when it comes to thinking about generative AI. I’m referring to the tendency to see it primarily as a tool used to cheat (whether by students generating essays for their classes, or professionals automating their grading, research, or writing). Not only is this use case of AI unethical: the work just isn’t very good. In a recent post to his Substack, John Warner experimented with creating a custom GPT that was asked to emulate his columns for the Chicago Tribune. He reached the same conclusion.

[…]

The job of historians and other professional researchers and writers, it seems to me, is not to assume the worst, but to work to demonstrate clear pathways for more constructive uses of these tools. For this reason, it’s also important to be clear about the limitations of AI — and to understand that these limits are, in many cases, actually a good thing, because they allow us to adapt to the coming changes incrementally. Warner faults his custom model for outputting a version of his newspaper column filled with cliché and schmaltz. But he never tests whether a custom GPT with more limited aspirations could help writers avoid such pitfalls in their own writing. This is change more on the level of Grammarly than HAL 9000.

In other words: we shouldn’t fault the AI for being unable to write in a way that imitates us perfectly. That’s a good thing! Instead, it can give us critiques, suggest alternative ideas, and help us with research assistant-like tasks. Again, it’s about augmenting, not replacing.

Source: How to use generative AI for historical research | Res Obscura

Overemployment as anti-precarity strategy

Historically, the way we fought back against oppressive employers and repressive regimes was to band together into unions. The collective bargaining power would help improve conditions and pay.

These days, in a world of the gig economy and hyper-individualism, that kind of collectivisation is on the wane. Enter remote workers deciding to take matters into their own hands, working multiple full-time jobs and being rewarded handsomely.

It’s interesting to notice that it seems to be very much a male, tech worker thing though. Of course, given that this was at the top of Hacker News, it will be used as an excuse to even more closely monitor the 99% of remote workers who aren’t doing this.

Person with cup of coffee between two working desks
Holding down multiple jobs has long been a backbreaking way for low-wage workers to get by. But since the pandemic, the phenomenon has been on the rise among professionals like Roque, who have seized on the privacy provided by remote work to secretly take on two or more jobs — multiplying their paychecks without working much more than a standard 40-hour workweek. The move is not only culturally taboo, but it's also a fireable offense — one that could expose the cheaters to a lawsuit if they're caught. To learn their methods and motivations, I spent several weeks hanging out among the overemployed online. What, I wondered, does this group of W-2 renegades have to tell us about the nature of work — and of loyalty — in the age of remote employment?

[…]

The OE hustlers have some tried-and-true hacks. Taking on a second or third full-time job? Given how time-consuming the onboarding process can be, you should take a week or two of vacation from your other jobs. It helps if you can stagger your jobs by time zone — perhaps one that operates during New York hours, say, and another on California time. Keep separate work calendars for each job — but to avoid double-bookings, be sure to block off all your calendars as soon as a new meeting gets scheduled. And don’t skimp on the tech that will make your life a bit easier. Mouse jigglers create the appearance that you’re online when you’re busy tending to your other jobs. A KVM switch helps you control multiple laptops from the same keyboard.

Some OE hustlers brag about shirking their responsibilities. For them, being overemployed is all about putting one over on their employers. But most in the community take pride in doing their jobs, and doing them well. That, after all, is the single best way to avoid detection: Don’t give your bosses — any of them — a reason to become suspicious.

[…]

The consequences for getting caught actually appear to be fairly low. Matthew Berman, an employment attorney who has emerged as the unofficial go-to lawyer in the OE community, hasn’t encountered anyone who has been hit with a lawsuit for holding a second job. “Most of the time, it’s not going to be worth suing an employee,” he says. But many say the stress of the OE life can get to you. George, the software engineer, has trouble sleeping at night because of his fear of getting caught. Others acknowledge that the rigors of juggling multiple jobs have hurt their marriages. One channel on the OE Discord is dedicated to discussions of family life, mostly among dads with young kids. People in the channel sometimes ask for relationship advice, and the responses they get from the other dads are sweet. “Your regard for your partner,” one person advised of marriage, “should outweigh your desire for validation."

Source: ‘Overemployed’ Workers Secretly Juggle Several Jobs for Big Salaries | Business Insider

There are better approaches than just having no friends at work

We get articles like this because we live in a world inescapably tied to neoliberalism and hierarchical ways of organising work. I’m sure the advice to “not make friends at work” is stellar survival advice in a large company, but it’s not the best way to ensure human flourishing.

I’ve definitely been burned by relationships at work, especially earlier in my career when managers used the ‘family’ metaphor. Thankfully, there’s a better way: own your own business with your friends! Then you can bring your full self to work, which is much like having your cake and eating it, too.

Image created by DALL-E 3 with the prompt: "An image illustrating the concept of maintaining clear boundaries at work. The scene shows a professional office environment where individuals of diverse backgrounds interact with respect and professionalism. A distinct physical separation, like a glass wall or a clear line on the floor, symbolizes the clear boundaries between personal and professional lives. The environment conveys a sense of order, efficiency, and a healthy work-life balance, emphasizing the importance of keeping these aspects distinct."
Real friends are people you can be yourself around and with whom you can show up being who you truly are—no editing needed. They are folks with whom you have developed a deep relationship over time that is mutual and flows in two ways. You are there for them and they are there for you. There is trust built.

At work, this relationship becomes very, very complex. Instead of being a true friendship, what ends up happening is that the socio-economic realities of your workplace come into play—and most often that poisons the well. When money is involved, it clouds any potential friendship. It makes the lines so blurry between real and contrived friendships that the waters become too murky to make clear and meaningful relationships. Is that a real friend, or do they want something from me that benefits them? Who can you really trust at work and what happens if they violate your trust? Is my boss really my friend or are they just trying to get me to work harder/longer/faster?

If, instead, we keep clear boundaries at work, we never fall into the trap of worrying about whom to trust and who has our best interest in mind. It prevents us from transferring our best interests to anyone else simply because we assume they are our friends. Why give that amazing power to someone else at work only to be disappointed?

Worse yet, people will often confuse co-workers with family, falling into the trap of having a “work mom,” “work dad,” or even a “work husband” or “work wife.” This can lead to a number of disastrous results that are well-documented, as family is not the same as work, and confusing the two has long-lasting ramifications that can stifle career success and lead to unethical behaviour. Keeping boundaries clear and your work life separate from your private life will help to alleviate this potential downfall and keep you focused on what really matters: the work.

Source: Why You Shouldn’t Make Friends at Work | Psychology Today Canada

Image: DALL-E 3


Building a system for success, without the glitches

Wise words from Seth Godin. It’s a twist on the advice to stop doing things that maybe used to work but don’t any more. The ‘glitch’ he’s talking about here isn’t just in terms of what might not be working for you or your organisation, but for society and humanity as a whole.

An image showing moths being irresistibly attracted to a bright light in a dark environment. Some moths are joyfully flying towards the light, while others are caught in a bug trap near the light source. This represents the idea of being drawn to something that seems beneficial but is actually harmful, a metaphor for systemic glitches or cultural traps.

Many moths are attracted to light. That works fine when it’s a bright moon and an open field, but not so well for the moths if the light was set up as a bug trap.

Processionary caterpillars follow the one in front until their destination, even if they’re arranged in a circle, leading them to march until exhaustion.

It might be that you have built a system for your success that works much of the time, but there’s a glitch in it that lets you down. Or it might be that we live in a culture that creates wealth and possibility, but glitches when it fails to provide opportunity to others or leaves a mess in our front yards.

Source: Finding the glitch | Seth’s Blog

Image: DALL-E 3

Is the only sustainable growth 'degrowth'?

This article by Noah Smith gave me pause for thought. There are plenty of people talking about ‘degrowth’ at the moment and, I have to say, I don’t know enough to have an opinion.

It’s really easy to get swept up in what other people who broadly share your outlook on life are sharing and discussing. While I definitely agree that ‘growth at all costs’ is problematic, and that ‘green growth’ is probably a sticking plaster, I’m not sure that ‘degrowth’ (as far as I understand it) is the answer?

Perhaps I need to do more reading. If it’s trying to measure things differently rather than just using GDP, then I’ve already written that I’m in favour. But just like calls to ‘abolish the police’ I’m not sure I can go fully along with that. Sorry.

I don’t want to beat this point to death, but I think it’s important to emphasize how unpleasant and inhumane a degrowth future would look like. People in rich countries would be forced to accept much lower standards of living, while people in developing countries would have a far more meager future to look forward to. This situation would undoubtedly cause resentment, leading to a backlash against the leaders who had mandated mass poverty. After the overthrow of degrowth regimes, we’d see the pendulum swing entirely toward leaders who promised infinite resource consumption, at which point the environment would be worse off than before. And this is in addition to the fact that degrowth would make it more difficult to invest in green energy and other technologies that protect the environment.

So while I think we do need to worry about the potential negative consequences of growth and try our best to ameliorate those harms, I think trying to impoverish ourselves to save the environment would be a catastrophic mistake, for both us and for the environment. This is not something any progressive ought to fight for.

Source: Yes, it’s possible to imagine progressive dystopias | Noahpinion

If you need a cheat sheet, it's not 'natural language'

Benedict Evans, whose post about leaving Twitter I featured last week, has written about AI tools such as ChatGPT from a product point of view.

He makes quite a few good points, not least that if you need ‘cheat sheets’ and guides on how to prompt LLMs effectively, then they’re not “natural language”.

DALL-E 3 image created with prompt: "This image will juxtapose two scenarios: one where a user is frustrated with a voice assistant's limited capabilities (like Alexa performing basic tasks), and another where a user is amazed by the vast potential of an LLM like ChatGPT. The metaphor here is the contrast between limited and limitless potential. The image will feature a split scene: on one side, a user looks disappointedly at a simple smart speaker, and on the other side, the same user is interacting with a dynamic, holographic AI, showcasing the broad capabilities of LLMs."
Alexa and its imitators mostly failed to become much more than voice-activated speakers, clocks and light-switches, and the obvious reason they failed was that they only had half of the problem. The new machine learning meant that speech recognition and natural language processing were good enough to build a completely generalised and open input, but though you could ask anything, they could only actually answer 10 or 20 or 50 things, and each of those had to be built one by one, by hand, by someone at Amazon, Apple or Google. Alexa could only do cricket scores because someone at Amazon built a cricket scores module. Those answers were turned back into speech by machine learning, but the answers themselves had to be created by hand. Machine learning could do the input, but not the output.

LLMs solve this, theoretically, because, theoretically, you can now not just ask anything but get an answer to anything.

[…]

This is understandably intoxicating, but I think it brings us to two new problems - a science problem and a product problem. You can ask anything and the system will try to answer, but it might be wrong; and, even if it answers correctly, an answer might not be the right way to achieve your aim. That might be the bigger problem.

[…]

Right now, ChatGPT is very useful for writing code, brainstorming marketing ideas, producing rough drafts of text, and a few other things, but for a lot of other people it looks a bit like those PCs ads of the late 1970s that promised you could use it to organise recipes or balance your cheque book - it can do anything, but what?
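Evans’s point about Alexa only having “half of the problem” can be made concrete with a toy sketch (every trigger phrase and canned answer here is invented for illustration): each capability is a function someone wrote by hand, and anything outside the table simply fails.

```python
# A toy version of hand-built skill dispatch: open-ended speech input
# funnels into a closed, hand-enumerated set of outputs.

SKILLS = {
    "cricket scores": lambda: "England are 245/3",       # hypothetical canned answer
    "set a timer": lambda: "Timer set for 10 minutes",   # built one by one, by hand
}

def assistant(utterance: str) -> str:
    for trigger, handler in SKILLS.items():
        if trigger in utterance.lower():
            return handler()
    # Everything not in the table hits this wall: the output half was never general.
    return "Sorry, I can't help with that."
```

An LLM collapses that table into a single generative model, which is exactly the shift (and the new failure mode of confidently wrong answers) that Evans is pointing at.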

Source: Unbundling AI | Benedict Evans

Cosplaying adulthood

I discovered this article published at The Cut while browsing Hacker News. I was immediately drawn to it, because one of the main examples it uses is ‘cosplaying’ adulthood while at kids' sporting events.

There are a few things to say about this, in my experience. The first is that status tends to be conferred by how good your kid is, no matter what your personality. Over and above that, personal traits — such as how funny you are — make a difference, as does how committed and logistically organised you are. And if you can’t manage that, you can always display appropriate wealth (sports kit, the car you drive). Crack all of this, and congrats! You’ve performed adulthood well.

I’m only being slightly facetious. The reason I can crack a wry smile is because it’s true, but also I don’t care that much because I’ve been through therapy. Knowing that it’s all a performance is very different to acting like any of it is important.

It’s impressive how much parents’ beliefs can seep in, especially the weird ones. As an adult, I’ve found myself often feeling out of place around my fellow parents, because parenthood, as it turns out, is a social environment where people usually want to model conventional behavior. While feeling like an interloper among the grown-ups might have felt hip and righteous in my dad’s day, it makes me feel like a tool. It does not make me feel like a “cool mom.” In the privacy of my own home, I’ve got plenty of competence, but once I’m around other parents — in particular, ones who have a take-charge attitude — I often feel as inept as a wayward teen.

The places I most reliably feel this way include: my kids’ sporting events (the other parents all seem to know each other, and they have such good sideline setups, whereas I am always sitting cross-legged on the ground absentmindedly offering my children water out of an old Sodastream bottle and toting their gear in a filthy, too-small canvas tote), parent-teacher meetings, and picking up my kids from their friends’ suburban houses with finished basements.

I’ve always assumed this was a problem unique to people who came from unconventional families, who never learned the finer points of blending in. But I’m beginning to wonder if everyone feels this way and that “the straight world,” or adulthood, as we call it nowadays, is in fact a total mirage. If we’re all cosplaying adulthood, who and where are the real adults?

Source: Adulthood Is a Mirage | The Cut

You'll be hearing a lot more about nodules

It was only this year that I first heard about nodules: potato-shaped, metal-rich lumps formed over millions of years on the sea bed. The minerals they contain are used for making batteries and other technologies which may help us transition away from fossil fuels.

However, deep-sea mining is, understandably, a controversial topic. At a recent summit of the Pacific Islands Forum, The Cook Islands' Prime Minister outlined his support for exploration and highlighted its potential by gifting seabed nodules to fellow leaders.

This, of course, is a problem caused by capitalism, and the view that the natural world is a resource to be exploited by humans. We’re talking about something which is by definition a non-renewable resource. I think we need to tread (and dive) extremely carefully.

What’s black, shaped like a potato and found in the suitcases of Pacific leaders when they leave a regional summit in the Cook Islands this week? It’s called a seabed nodule, a clump of metallic substances that form at a rate of just centimetres over millions of years.

Deep-sea mining advocates say they could be the answer to global demand for minerals to make batteries and transform economies away from fossil fuels. The prime minister of the Cook Islands, Mark Brown, is offering nodules as mementos to fellow leaders from the Pacific Islands Forum (Pif), a bloc of 16 countries and two territories that wraps up its most important annual political meeting on Friday.

[…]

“Forty years of ocean survey work suggests as much as 6.7bn tonnes of mineral-rich manganese nodules, found at a depth of 5,000m, are spread over some 750,000 square kilometres of the Cook Islands continental shelf,” [the Cook Islands Seabed Minerals Authority] says.

Source: Here be nodules: will deep-sea mineral riches divide the Pacific family? | Deep-sea mining | The Guardian

Our ancestors were using complex tools and woodworking approaches almost half a million years ago

Nature reports that, at the Kalambo Falls archaeological site in Zambia, researchers have unearthed the earliest known examples of woodworking — dating back at least 476,000 years. This is a significant find as it includes two logs interlocked by a hand-cut notch, a method previously unseen in early human history. The discovery also features four other wood tools: a wedge, digging stick, cut log, and a notched branch. These artifacts demonstrate early humans' advanced skills in shaping wood for various purposes, challenging the traditional view that early hominins primarily used stone tools.

I’d also never heard of the approach the team used: luminescence dating, a method that determines when mineral grains in the surrounding sediment were last exposed to light. The team combined this with various wood analysis techniques.

The findings, especially the interlocked logs, suggest that early humans had the capability to construct large structures and manipulate wood in complex ways. It’s a groundbreaking discovery, as it not only pushes back the timeline of woodworking in Africa but also sheds new light on the cognitive abilities and technological diversity of our early ancestors. Amazing.

Wooden tools

The Quaternary sequence is a 9-m-deep exposure above the Kalambo River (BLB1 is a geological section). Sediments are fluvial sands and gravels with occasional, discontinuous beds of fine sands, silts and clays with wood preserved in the lowermost 2 m.... A permanently elevated water table has preserved wood and plant remains (Supplementary Information Section 1). The depositional sequence is typical of a high- to moderate-energy sandbed river that underwent lateral migration. The sands are dominated by a lower unit of horizontal bedding and an upper unit of planar/trough cross-bedding. Upper and lower sand units are separated by fine sands, silts and clays with plant material deposited in still water after the river migrated/avulsed elsewhere in the floodplain. Wood is deposited in this environment either through anthropogenic emplacement, or naturally transported in the flow, and snagged on sand bedforms.

[…]

Sixteen samples for dating were collected at Site BLB by hammering opaque plastic tubes into the sediment. A combination of field gamma spectrometry, laboratory alpha and beta counting and geochemical analyses were used to determine radionuclide content, and the dose rate and age calculator was used to calculate radiation dose rate. Sand-sized grains (approximately 150 to 250 µm in diameter) of quartz and potassium-rich feldspar were isolated under red-light conditions for luminescence measurements and measured on Risø TL/OSL instruments using single-aliquot regenerative dose protocols. Single-grain quartz OSL measurements dated sediments younger than around 60 kyr, but beyond this age the OSL signal was saturated. pIR IRSL measurements of aliquots consisting of around 50 grains of potassium-rich feldspars were able to provide ages for all samples collected. The pIR IRSL signal yielded an average value for anomalous fading of 1.46 ± 0.50% per decade. Where quartz OSL and feldspar pIR IRSL were applied to the same samples, the ages were consistent within uncertainties without needing to correct for anomalous fading. The conservative approach taken here has been to use ages without any correction for fading. If a fading correction had been applied then the ages for the wooden artefacts would be older.
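Underneath all that measurement, the age calculation itself is simple division: the equivalent dose the grains absorbed since they were last exposed to light, divided by the environmental dose rate. A sketch with made-up numbers (not values from the paper):

```python
def luminescence_age_kyr(equivalent_dose_gy: float, dose_rate_gy_per_kyr: float) -> float:
    """Luminescence age in thousands of years:
    total absorbed radiation dose / environmental dose rate."""
    return equivalent_dose_gy / dose_rate_gy_per_kyr

# Hypothetical values for illustration only: grains that absorbed
# 1,190 Gy at 2.5 Gy per thousand years were buried ~476 kyr ago.
print(luminescence_age_kyr(1190, 2.5))  # 476.0
```

The hard science is in measuring those two quantities precisely; the arithmetic at the end is the easy part.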

Source: Evidence for the earliest structural use of wood at least 476,000 years ago | Nature

Pufflings can't resist the bright lights of the city

I haven’t seen puffins in real life very often, but they’re associated with the Farne Islands off the coast of Northumberland, my home county. They’re a bird associated with more northern climes, and are enigmatic creatures.

It’s both sad and heartening to see that, to save them from going extinct in Iceland, locals have to stop them wandering towards the bright lights of human civilisation. Instead, they take the baby puffins, which are adorably called ‘pufflings’, and throw them off cliffs to encourage them to fly.

Natural evolution can’t happen as fast as humans are changing the world, so unless we want to see the absolute devastation of biodiversity on our planet, traditions such as this are going to have to become commonplace.

Puffling being held by human
Watching thousands of baby puffins being tossed off a cliff is perfectly normal for the people of Iceland's Westman Islands.

This yearly tradition is what’s known as “puffling season” and the practice is a crucial, life-saving endeavor.

The chicks of Atlantic puffins, or pufflings, hatch in burrows on high sea cliffs. When they’re ready to fledge, they fly from their colony and spend several years at sea until they return to land to breed, according to Audubon Project Puffin.

Pufflings have historically found the ocean by following the light of the moon, digital creator Kyana Sue Powers told NPR over a video call from Iceland. Now, city lights lead the birds astray.

[…]

Many residents of Vestmannaeyjar spend a few weeks in August and September collecting wayward pufflings that have crashed into town after mistaking human lights for the moon. Releasing the fledglings at the cliffs the following day sets them on the correct path.

This human tradition has become vital to the survival of puffins, Rodrigo A. Martínez Catalán of Náttúrustofa Suðurlands [South Iceland Nature Research Center] told NPR. A pair of puffins – which mate for life – only incubate one egg per season and don’t lay eggs every year.

“If you have one failed generation after another after another after another,” Catalán said, “the population is through, pretty much."

Source: During puffling season, Icelanders save baby puffins by throwing them off cliffs | NPR

Co-Intelligence, GPTs, and autonomous agents

The big technology news this past week has been OpenAI, the company behind ChatGPT and DALL-E, announcing the availability of GPTs. Confusing naming aside, this introduces the idea of anyone being able to build ‘agents’ to help them with tasks.

Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, is something of an authority in this area. He’s posted on what this means in practice, and gives some examples.

Mollick has a book coming out next April, called Co-Intelligence, which I’m looking forward to reading. For now, I’d recommend adding his newsletter to those you read about AI (along with Helen Beetham’s, of course).

The easy way to make a GPT is something called GPT Builder. In this mode, the AI helps you create a GPT through conversation. You can also test out the results in a window on the side of the interface and ask for live changes, creating a way to iterate and improve your work. This is a very simple way to get started with prompting, especially useful for anyone who is nervous or inexperienced. Here, I created a choose-your-own adventure game by just asking the AI to make one, and letting it ask me questions about what else I wanted.

[…]

So GPTs are easy to make and very powerful, though they are not flawless. But they also have two other features that make them useful. First, you can publish or share them with the world, or your organization (which addresses my previous calls for building organizational prompt libraries, which I call grimoires) and potentially sell them in a future App Store that OpenAI has announced. The second thing is that the GPT starts seamlessly from its hidden prompt, so working with them is much more seamless than pasting text right into the chat window. We now have a system for creating GPTs that can be shared with the world.

[…]

In their reveal of GPTs, OpenAI clearly indicated that this was just the start. Using that action button you saw above, GPTs can be easily integrated with other systems, such as your email, a travel site, or corporate payment software. You can start to see the birth of true agents as a result. It is easy to design GPTs that can, for example, handle expense reports. It would have permission to look through all your credit card data and emails for likely expenses, write up a report in the right format, submit it to the appropriate authorities, and monitor your bank account to ensure payment. And you can imagine even more ambitious autonomous agents that are given a goal (make me as much money as you can) and carry that out in whatever way they see fit.

You can start to see both near-term and farther risks in this approach. In the immediate future, AIs will become connected to more systems, and this can be a problem because AIs are incredibly gullible. A fast-talking “hacker” (if that is the right word) can convince a customer service agent to give a discount because the hacker has “super-duper-secret government clearance, and the AI has to obey the government, and the hacker can’t show the clearance because that would be disobeying the government, but the AI trusts him right…” And, of course, as these agents begin to truly act on their own, even more questions of responsibility and autonomous action start to arise. We will need to keep a close eye on the development of agents to understand the risks, and benefits, of these systems.

Source: Almost an Agent: What GPTs can do | Ethan Mollick

Small sufferings

As I’ve mentioned sporadically for over a decade, I have a cold shower every morning. Not only is it good for mental health, but it’s a way of adding a small bit of suffering into my life.

That might sound like an odd thing to do, but study after study shows that it’s the difference between our experiences that provides pleasure or pain. Humans can adapt to anything, and I believe my days are better for starting them off with a small amount of suffering.

This post riffs on that idea, and as someone who’s no stranger to wild camping in the snow, I can definitely attest to daily cold showers being more effective than one-off trips for building resilience!

Shower in the middle of a landscape
I suspect that small sufferings spread out across time are more helpful; that, for example, a 10 minute cold shower each day would make me feel more total gratitude for my cozy life than a one-week cold camping trip once per year – life is long and memory is short, so the cold trip would probably fade from memory after a week or two. But it's strangely hard to force myself to suffer, even if it's for my own good.
Source: Optimal Suffering

Image: Unsplash

Twitter now feels like the Brewster’s Millions of tech

I’d like to share two ‘leaving Twitter’ posts I came across yesterday. They occupy somewhat opposite ends of the spectrum in terms of reasons for the decision. One is cold and rational, as befits an analyst like Benedict Evans. The other is more passionate and emotional, as you’d expect from someone like Douglas Rushkoff.

Black mug featuring white Twitter logo

Let’s take Evans first, who writes:

...The last year swapped stasis for chaos. Stuff breaks at random and you don’t know if it’s a bug or a decision. The advertisers have fled, and no-one knows what will be broken by accident or on purpose tomorrow. The example that’s closest to home for me was that the in-house newsletter product was shut down - and then links to other newsletters were banned. Pick one! It’s hard to see anyone who depends on having a long-term platform investing in anything that Twitter builds, when it might not be there tomorrow.

There are various diagnoses for this. Tesla has sometimes been run in chaos as well, but the pain of that is on the employees, not the customers: you can’t wake up in the middle of the night and decide the car should have five wheels and ship that the next day, but you can make those kinds of decisions in software, and Elon Musk does, all the time. Perhaps it’s a fundamental failure to understand how you run a community. Or something else. But whatever the explanation, Twitter now feels like the Brewster’s Millions of tech - ‘Watch One Man Turn $40bn Into $4 In 24 Months!’

I couldn’t really care less about Twitter’s business model, although I did see the writing on the wall in 2014 when I wrote about what I call ‘software with shareholders’. So poor platform decisions don’t really move me.

What I am concerned about is reputational damage. Which is why our co-op’s Twitter/X account has been mothballed and will be deleted in January 2024. Being associated with a toxic brand is never a good idea.

So let’s move onto Rushkoff, who starts writing about Twitter/X but ends up (perhaps unhelpfully) generalising:

The uniquely destabilizing aspect of these platforms is that there’s no friction. There are no moderating influences. It’s a bit like running on ice. You go in a certain direction, and then you can’t stop. You just keep sliding in that direction. That’s what happens with social media. There’s no friction, no moderation, no balance. Every idea ends up sliding towards its absolute conclusion immediately. So ideas in progress, things that maybe could be considered together — they end up just going to their logical extremes.

[…]

That frictionless quality of this space untethers its users from reality. It’s like an acid trip where the hallucinations can become more compelling than the real. Every thought spins out and magnifies. If you have a fear, it’s as if it is just conjured into reality. Without an intentional set and setting for such an acid trip, one can easily get lost in the turbulence.

I think Rushkoff makes a great point about “ideas in progress”. It used to be the case, before everyone arrived on social media, that you could share things that were unfinished, works in progress, half-baked ideas. These days, people are held to account for things they’ve posted over a decade earlier, as if people don’t learn and grow.

I’m pleased to have made the decision a couple of years ago to leave Twitter completely, and to have done so without much fanfare. There are much better spaces to be online, usually in the dark forests. But there are more public places, too. The Fediverse (where you can find me on social.coop, among other spaces) continues to be a good experience for me. More recently, I’ve found Substack Notes to be pretty great.

People used to describe Twitter like a café or bar where you could get involved with, and overhear, great conversations. To extend that analogy, sometimes a bar gets overrun with the wrong kind of person, and so people with any kind of taste move on. It seems like that’s what’s happened now with Twitter.

Sources:

Bill Gates on why AI agents are better than Clippy

While there’s nothing particularly new in this post by Bill Gates, it’s nevertheless a good one to send to people who might be interested in the impact that AI is about to have on society.

Gates compares AI agents to Clippy which, he says, was merely a bot. After going through all of the advantages there will be to AI agents acting on your behalf, Gates does, to his credit, talk about privacy implications. He also touches on social conventions and how human norms interact with machine efficiency.

The thing that strikes me in all of this is something that Audrey Watters discussed a few months ago in relation to fitness technologies: will these technologies make us more likely to live ‘templated lives’? In other words, are they helping support human flourishing, or nudging us towards lives that make more revenue for advertisers, etc.?

Agents will affect how we use software as well as how it’s written. They’ll replace search sites because they’ll be better at finding information and summarizing it for you. They’ll replace many e-commerce sites because they’ll find the best price for you and won’t be restricted to just a few vendors. They’ll replace word processors, spreadsheets, and other productivity apps. Businesses that are separate today—search advertising, social networking with advertising, shopping, productivity software—will become one business.

[…]

How will you interact with your agent? Companies are exploring various options including apps, glasses, pendants, pins, and even holograms. All of these are possibilities, but I think the first big breakthrough in human-agent interaction will be earbuds. If your agent needs to check in with you, it will speak to you or show up on your phone. (“Your flight is delayed. Do you want to wait, or can I help rebook it?”) If you want, it will monitor sound coming into your ear and enhance it by blocking out background noise, amplifying speech that’s hard to hear, or making it easier to understand someone who’s speaking with a heavy accent.

[…]

But who owns the data you share with your agent, and how do you ensure that it’s being used appropriately? No one wants to start getting ads related to something they told their therapist agent. Can law enforcement use your agent as evidence against you? When will your agent refuse to do something that could be harmful to you or someone else? Who picks the values that are built into agents?

[…]

But other issues won’t be decided by companies and governments. For example, agents could affect how we interact with friends and family. Today, you can show someone that you care about them by remembering details about their life—say, their birthday. But when they know your agent likely reminded you about it and took care of sending flowers, will it be as meaningful for them?

Source: AI is about to completely change how you use computers | Bill Gates

The fragmentation of the (social) web

These days, I lean heavily on Ryan Broderick’s Garbage Day newsletter to know what’s going on in the areas of social media I don’t pay much attention to. In other words, TikTok, Instagram, and… well, most of it.

However, as Broderick himself points out, nobody really knows what’s going on, and there is no centre, due to the fragmentation of the (social) web. This used to be called ‘balkanization’, but because the 1990s is a long time ago, Broderick has coined the term ‘the Vapor Web’. He claims we’re in a ‘post-viral’ time.

I don’t think ‘The Vapor Web’ will catch on as a term, though. At least not amongst British people and Canadians. We like our ‘u’ too much ;)

An abstract representation of the 'fragmentation of the internet'.
My big unified theory of the internet is that the way we use the web is constantly being redefined by conflict and disaster. I brought this up in an interview with Bloomberg last month. If you look back at particularly big years for the web — 2001, the stretch from 2010 to 2012, 2016, 2020, etc. — you typically find moments of big global upheaval arriving right as a suite of new digital tools reach an inflection point with users. Then, suddenly, we have a new way of being online.

Unlike previous global conflicts, however, this time around, the defining narrative about online behavior is not just that there is, seemingly, an absence of it, but that it also still, partially, works the way it did 10 years ago. Every millennial is experiencing an overwhelming feeling that, as WIRED recently wrote, “first-gen social media users have nowhere to go,” but that’s not actually true. It’s just that TikTok is where everyone is and TikTok doesn’t work like Facebook or even YouTube. Which is why the White House is agonizing over the popularity of TikTok hashtags right now instead of canceling my student loan debt.

[…]

Let’s do one more, to bring us back to Israel and Palestine. In the last 120 days, the #Israel hashtag has been used around 220,000 times and been viewed three billion times. The #Palestine hashtag has been used 230,000 times and has been viewed around two billion times. Yes, Palestine is slightly more popular on TikTok, but nothing out of line with what outlets like NPR have found by, you know, actually polling Americans along political and generational lines. To say nothing of how minuscule these numbers are when compared to how large TikTok is.

Which is to say that the internet doesn’t make sense in aggregate anymore and trying to view it as a monolith only gives you bad, confusing, and, oftentimes, wrong impressions of what’s actually going on.

The best descriptions of the current state of the web right now were both actually published months before the fighting in the Middle East broke out and written about a completely different topic. Semafor’s Max Tani coined the term, “the fragmentation election,” which was a riff on writer John Herrman’s similar idea, the “nowhere election”. Tani points to declining media institutions and dying platforms as the culprit for all the amorphousness online. And Herrman latches on to podcasts and indie media. Both are true, but I think those are all just symptoms. And so, to piggyback off both of them, and go a bit broader (as I typically do), I’m going to call our current moment the Vapor Web. Because there is actually more internet with more happening on it — and with bigger geopolitical stakes — than ever before. And yet, it’s nearly impossible to grab ahold of it because none of it adds up into anything coherent. Simply put, we’re post-viral now.

Source: Is the web actually evaporating? | Garbage Day

Image: DALL-E 3

AI generated images in a time of war

It’s one thing for user-generated content to be circulated around social media for the purposes of disinformation. It’s another thing entirely when Adobe’s stock image marketplace is selling AI-generated ‘photos’ of destroyed buildings in Gaza.

This article in VICE includes a comment from an Adobe spokesperson who references the Content Authenticity Initiative. But this just shifts the problem onto the user rather than the marketplace. People looking to download AI-generated images to spread disinformation don’t care about the CAI, and will actively look for ways to circumvent it.

Screenshot of Adobe stock images site with AI-generated image titled "Destroyed buildings in Gaza town of Gaza strip in Israel, Affected by war."
Adobe is selling AI-generated images showing fake scenes depicting bombardment of cities in both Gaza and Israel. Some are photorealistic, others are obviously computer-made, and at least one has already begun circulating online, passed off as a real image.

As first reported by Australian news outlet Crikey, the photo is labeled “conflict between Israel and palestine generative ai” and shows a cloud of dust swirling from the tops of a cityscape. It’s remarkably similar to actual photographs of Israeli airstrikes in Gaza, but it isn’t real. Despite being an AI-generated image, it ended up on a few small blogs and websites without being clearly labeled as AI.

[…]

As numerous experts have pointed out, the collapse of social media and the proliferation of propaganda has made it hard to tell what’s actually going on in conflict zones. AI-generated images have only muddied the waters, including over the last several weeks, as both sides have used AI-generated imagery for propaganda purposes. Further compounding the issue is that many publicly-available AI generators are launched with few guardrails, and the companies that build them don’t seem to care.

Source: Adobe Is Selling AI-Generated Images of Violence in Gaza and Israel | VICE

The Societal Side-eye

I’ll turn 43 next month. I seem to have a lot more grey hair than other people my age. Some people act towards me as if I’m old. Perhaps I am in their eyes.

Fair enough, some days I wake up and I feel a million years old, but most of the time my fitness regime means that I feel pretty awesome.

This article is about ignoring the ‘societal side-eye’ and doing badass things anyway. It’s something we all need to remember as we age: don’t be beholden to other people’s expectation of what’s appropriate.

You and I are Way Too Old to let a societal side-eye sideline us from a badass life, however we define it.

Who says we’re not supposed to even countenance the idea of learning to in-line skate. Or skateboard. Or paraglide. Or try trapeze work. Or aerial silks. Or whatever it was that got away from us as youths, and now beckons us back if we would only put in the training time. When does a timeline run out?

If we do such things, particularly if we sport grey hair, we are subjected to

“OH ISN’T THAT SO CUUUUUUUUUTE!”

[…]

Humans are a judgmental lot. We love to make fun of, mock and ridicule, especially those who are doing things we don’t have the guts to try. When some tiny Black woman well over a hundred heads out onto the track and runs a record time, we call her sweet or cute while she is engaging in serious badassery.

[…]

It’s hard enough to age. It’s far harder to age in an ageist society which is eager to denounce and mock those of us who defy expectations and insist on writing our own history, full of whatever badassery fills our hearts.

Source: You’re Too Old to Care About the Societal Side Eye When You Want to Be a Badass | Too Old for This Sh*t

The first half of life is Tetris; the second half is Jenga

I don’t think much of the poem, but I’m stealing the first line of this article as the title of this post. It’s a useful metaphor!

You can’t not fall, but you can with humility redirect your downward inertia into a meaningful lateral motion. You can also spin, and you can allow yourself to be spun.
Source: Tetris Sequence | Opaque Hourglass

Image: Unsplash

Don't tell me that hiring isn't broken

Despite the great work being done around Open Recognition, the main use case for digital credentials remains helping people get jobs. Which means that I’ve spent over a decade, on and off, being forced to think about the interface between people wanting to be hired, and those who want to hire those people.

This article talks about job seekers using AI tools to automate applications. In the example given, the system sent 5,000 applications on behalf of someone, which landed them 20 interviews. They’d previously got the same number of interviews from manually applying to 200–300 jobs, so the automated approach was far less work.

Credentials are always a form of arms race if we’re always stacking them vertically like the sheets of paper in the image below. Open Recognition allows us to think about a more wide-ranging set of skills, but it requires people in HR departments to think differently. Sometimes it’s about quality over quantity.

Many job seekers will understand the allure of automating applications. Slogging through different applicant tracking systems to reenter the same information, knowing that you are likely to be ghosted or auto-rejected by an algorithm, is a grind, and technology hasn’t made the process quicker. The average time to make a new hire reached an all-time high of 44 days this year, according to a study across 25 countries by the talent solutions company AMS and the Josh Bersin Company, an HR advisory firm. “The fact that this tool exists suggests that something is broken in the process,” Joseph says. “I see it as taking back some of the power that’s been ceded to the companies over the years.”

Recruiters are less enamored with the idea of bots besieging their application portals. When Christine Nichlos, CEO of the talent acquisition company People Science, told her recruiting staff about the tools, the news raised a collective groan. She and some others see the use of AI as a sign that a candidate isn’t serious about a job. “It’s like asking out every woman in the bar, regardless of who they are,” says a recruiting manager at a Fortune 500 company who asked to remain anonymous because he wasn’t authorized to speak on behalf of his employer.

Other recruiters are less concerned. “I don’t really care how the résumé gets to me as long as the person is a valid person,” says Emi Dawson, who runs the tech recruiting firm NeedleFinder Recruiting. For years, some candidates have outsourced their applications to inexpensive workers in other countries. She estimates that 95 percent of the applications she gets come from totally unqualified candidates, but she says her applicant tracking software filters most of them out—perhaps the fate of some of the 99.5 percent of Joseph’s LazyApply applications that vanished into the ether.

Source: AI bots can do the grunt work of filling out job applications for you | Ars Technica

Accepting and trying to deal with climate as an overriding priority

I need to dig into this BBC R&D report, but it looks fascinating at first glance. I recognise the names of some of the people who were interviewed in the process of creating it, and what’s interesting to me is that instead of identifying the ‘next big thing’ in terms of technology, they found “a complex set of factors that we believe will enable and catalyse one another, sometimes in surprising and unpredictable ways”.

The most important of these, of course, was “accepting and trying to deal with climate as an overriding priority” but also identifying two types of complexity. The first is “a sense that in order to simply go about your day as a person, it’s necessary to interact with, and understand, many complex sources of information”. The second is “a sense that the overarching systems of the world like politics, finance, economics, and healthcare, are becoming more complex and difficult to understand”.

Late in 2022, we began a straightforward-sounding research project: compile a list of technologies that we should be paying attention to in BBC Research & Development over the next few years and make some recommendations about their adoption to the wider BBC. As I’m sure you’ve already guessed, things didn’t turn out quite so straightforward.

By the end of the project, we’d interviewed twenty-two people from the fields of science, economics, education, technology, design, business leadership, research, activism, journalism, and many points between. We spoke to people from both inside and outside the BBC and around the world. All of these people have a unique view on the future, and our report teases out the common themes from the interviews and compiles their ideas about how things might come to be in the near future.

We grouped the themes we identified into five sections. The first, A complex world, outlines sources of complexity and uncertainty our interviewees see in their worlds. Climate change is by far the largest and most significant of these. The next section, A divided world, also covers big-picture context and outlines some of the social and economic drivers our interviewees see playing out over the next few years. The AI boom and New interactions go into detail on specific technologies and use cases our interviewees think will be significant. Finally, The case for hope bundles up some of the reasons our interviewees see to be hopeful about the future — provided we are willing to act to bring about the changes we’d like to see in the world.

Source: Projections: Things are not normal | BBC R&D

Therapy is simple

Craig Mod is a couple of months older than me, as I turn 43 just before Christmas. Like me, he’s gone through some therapy. Unlike me, he lives alone, and has continued therapy sessions for over five years.

What I like about the raw honesty of what he writes in this dispatch is how he wishes that everyone had access to therapy. Despite all of the positive messages about mental health, there’s still something of a stigma about getting some help. As if you should just “get over it”.

But therapy is part of how you become you. As Craig says, in the bit that comes after the part I’ve quoted below: “Therapy is simple. You load up FaceTime and speak out loud the things you’re most terrified about in life. Be radically open and honest, treating yourself as a third party, kindly observant without judgement."

It’s hard talking about your hopes, fears, and dreams with people you are emotionally invested in. There’s something remarkably grown-up and liberating about finally being able to start living a more flourishing life by sorting your shit out.

I’ve been thinking about aloneness recently. Well, I’ve been thinking about it my whole life. It’s difficult to remember a time where I didn’t feel alone or apart or “on my own.” And I’ve spent the majority of my adult life — from 17 onward — living mostly alone, going to bed alone, and waking up alone. Left to my own volition to somehow transmute that aloneness into forward momentum, “output,” (“content” ha ha) and positive habits.

[…]

I just turned 43 the other day. As part of the fun of embracing mid-life crises, I’m in pattern matching mode. Two decades of watching friends either pair up and start families (or just embark on paired adventures), or continue down paths of aloneness. It seems to get more and more acute — the effects of aloneness — as folks drift into their 40s. It also seems to be more and more difficult to break habits connected with aloneness the older we get. This makes sense. Habits self-reinforce. And the folks with families have less time for solo people, creating even more dissonance.

[…]

I’ve spent the last five and half years speaking weekly with a therapist in New York over FaceTime. I started because I was exhausted. I recognized toxic relationship patterns that I had held onto since my teenage years, and wanted to break free. And I recognized that I had spent roughly twenty years not being able to do that on my own. (I had made some strides, of course, in fits and starts; most notably when I was 27, then: at the lowest of lows, I began running in the middle of the night (2am, feeling like I was losing my mind, put on my shoes, and ran the silent moonlit summer streets of Tokyo until my lungs burned and I felt back on the ground), soon completing two full marathons, felt my sense of value and self-worth rise, charged more for my time, made my way to Palo Alto, worked with incredible talent, made real money, big projects, huge scale, proved to myself I wasn’t stuck — it was an incredible stretch, thinking back on it now, a stretch of life-transformational love and hugs and sense of support, all initially catalyzed by feeling more alone than ever before, a yawning endless aloneness, and wanting to crawl out of that well before someone came and sealed the top.) Back to five years ago — I was 37 and stuck and thought — OK, let’s try something new. Hence: calling in for support (finally!).

I feel guilty for having access to this therapist. I want everyone to have access to someone like this. The world would be whole if you gave everyone a talented therapist and a cat. I can’t overstate how transformational my weekly act of analysis has been. I am still broken in many obvious (and non-unique) ways. But through these weekly sessions I’ve mitigated a huge chunk of lingering aloneness.

Source: Tokyo Walk, TBOT Cover, Aloneness | Roden Issue 086

Sitting staring at a wall for hours

Some wise words from author Warren Ellis, whose Sunday newsletter ‘Orbital Operations’ is well worth subscribing to.

Related: although she hasn’t specifically confirmed it, I get the feeling that Laura is working on a sequel to her novel Maybe Zombies. If you haven’t read it, I’d recommend it.

I remember a piece by Harry Harrison - maybe in HELL’S CARTOGRAPHERS - where he had to explain to his mother in law that when he was sitting staring at a wall for hours, he was in fact working. I imagine most writers will tell you three things about thinking time - it’s the most valuable work, the most frustrating work, and the least billable. Very few people in this world get paid for the hours spent staring at the wall. And it’s always frustrating, because what you want is for the form of a story to just drop into your head after thirty minutes in the chair, and that very rarely happens. It’s days or weeks of wandering around inside your own head and its stores, which looks to the rest of the world like you’ve become a vegetable creature whose circumnutations do nothing but slowly capture and engulf pieces of chocolate.

Yes, we are all outwardly lazy bastards — and if you are entering the journey of a creator of stories now, then be advised — you’re allowed to stare at the wall for as long as you damn well like and need to. Those days and weeks of farting around within the walls of your mind are what every piece of art people love comes from. Every story you ever adored? Someone sat around like a piece of meat propped on a sofa until it happened. There are no lazy writers. It just takes some of us longer to get off the sofa and put the pen “on the attack against the innocent paper.”

(That line is from Olga Tokarczuk.)

You have permission to dream other lives and whole new worlds for as long as it takes.

Source: Orbital Operations, 5 November 2023

'Restorying' your life as a hero's journey

There are some people, perhaps most people, who do not expect setbacks and problems in life. They seem to think that it should all be smooth sailing, and that anything that interferes with this unarticulated plan is somehow annoying or unfair.

Perhaps because I spent my teenage years reading philosophy (which I studied at university) and then my adult life reading Stoic philosophers such as Epictetus and Marcus Aurelius, this isn’t my view. Instead, I’m well aware that everyone has to deal with setbacks and, in fact, they make you stronger and more focused.

This article discusses the results of research based on interventions that take as their basis the ‘hero’s journey’ described by Joseph Campbell. He noticed that cultures around the world had foundational stories which were based on a similar structure. The researchers took this approach, updated it for modern life, and used the structure as an intervention to help individuals tell better stories about their lives.

What do Beowulf, Batman and Barbie all have in common? Ancient legends, comic book sagas and blockbuster movies alike share a storytelling blueprint called “the hero’s journey.” This timeless narrative structure, first described by mythologist Joseph Campbell in 1949, describes ancient epics, such as the Odyssey and the Epic of Gilgamesh, and modern favorites, including the Harry Potter, Star Wars and Lord of the Rings series. Many hero’s journey stories have become cultural touchstones that influence how people think about their world and themselves.

Our research reveals that the hero’s journey is not just for legends and superheroes. In a recent study published in the Journal of Personality and Social Psychology, we show that people who frame their own life as a hero’s journey find more meaning in it. This insight led us to develop a “restorying” intervention to enrich individuals’ sense of meaning and well-being. When people start to see their own lives as heroic quests, we discovered, they also report less depression and can cope better with life’s challenges.

[…]

To explore the connection between people’s life stories and the hero’s journey, we first had to simplify the storytelling arc from Campbell’s original formulation, which featured 17 steps. Some of the steps in the original set were very specific, such as undertaking a “magic flight” after completing a quest. Think of Dorothy, in the novel The Wonderful Wizard of Oz, being carried by flying monkeys to the Emerald City after vanquishing the Wicked Witch of the West. Others are out of touch with contemporary culture, such as encountering “women as temptresses.” We abridged and condensed the 17 steps into seven elements that can be found both in legends and everyday life: a lead protagonist, a shift of circumstances, a quest, a challenge, allies, a personal transformation and a resulting legacy.

For example, in The Lord of the Rings, Frodo (the protagonist) leaves the Shire (a shift) to destroy the Ring (a quest). Sam and Gandalf (his allies) help him face Sauron’s forces (a challenge). He discovers unexpected inner strength (a transformation) and then returns home to help the friends he left behind (a legacy). In a parallel way in everyday life, a young woman (the protagonist) might move to Los Angeles (a shift), develop an idea for a new business (a quest), get support from her family and new friends (her allies), overcome self-doubt after initial failure (a challenge), grow into a confident and successful leader (a transformation) and then help her community (a legacy).

[…]

Anyone can frame their life as a hero’s journey—and we suspect that people can also benefit from taking small steps toward a more heroic life. You can see yourself as a heroic protagonist, for example, by identifying your values and keeping them top of mind in daily life. You can lean into friendships and new experiences. You can set goals much like those of classic quests to stay motivated—and challenge yourself to improve your skills. You can also take stock of lessons learned and ways that you might leave a positive legacy for your community or loved ones.

Source: To Lead a Meaningful Life, Become Your Own Hero | Scientific American

The real threat to manhood: remaining children

This is an interesting article that, to be honest, I expected a bit more from. It comments on some obvious things, such as how problematic a rigid and joyless form of ultra-masculinity can be, while being careful not to suggest that discipline is unimportant.

While I appreciate that the author, Dave Holmes, doesn’t use the term ‘toxic masculinity’ (which I think doesn’t really mean anything any more), what I do think he could have developed further is the very last line. In it, he mentions that the real threat to manhood is us “staying children”, which is a much more interesting area to explore.

The world is more individualised, gamified, and commercialised than ever before. Masculinity, as a concept, is therefore an idea to be bought and sold. The version we need to fix the world is not the version that gains the most likes on social media; it’s one that is confident, self-reflective, and biased towards helping others.

You can be forgiven for not noticing that men aren’t men anymore, because men are always not men anymore. “Men aren’t men anymore”—like “nobody younger than me wants to work” and “this isn’t real music”—has been said every day in every language since we’ve had days and languages. It’s a particular concern in America, where men haven’t been men anymore from the jump. Almost certainly one of our founding fathers told his son, “Don’t leave this house without your wig, stockings, and frock coat—I didn’t raise a sissy.”

[…]

“Men are not men anymore” is ancient; “men are not men anymore, buy this and fix that,” slightly newer. But this is a much bleaker time, a time of “men are not men anymore, smash that subscribe button.” A generation of boys looking for rules has met a generation of creeps looking for an audience: Jordan Peterson, Steven Crowder, Andrew Tate. Guys who offer a rigid and joyless version of masculinity. Guys whose brand says, “I have learned how to throttle everything that is exuberant and playful within myself to become someone else’s version of what a man is; what’s wrong with you?” Guys who have chosen a car for its color and will never forgive themselves for it.

[…]

This is not to say you should throw away rules, or that playing to a person’s insecurity isn’t sometimes the right move. I quit smoking at 30, cold turkey, unless I was in a bar, or walking home after a good meal, or near someone who asked, “Would you like a cigarette?” My friend Lee picked up on this. “For someone who has quit smoking,” he said, “you are doing a lot of smoking.” I protested, “It’s just hard in certain situations.” Lee looked me in the eye and said, “Have you tried being a man?” Haven’t had a cigarette since.

Arthur Schlesinger Jr. and Lee aren’t saying independent thinking and discipline are virtues for men as opposed to women. They are just virtues. Rules for good living. Like we aim to provide in the Esquire of 2023. It’s time we stop being so worried about becoming women and start focusing on the real threat to manhood: staying children.

Source: Can You Still Say ‘Be A Man’? | Esquire

Happiness vs GDP

Making the world a happier, fairer, safer place seems like an idea that most people can get behind. But how do you do it? Although there’s a relationship between average self-reported happiness of a population and increased GDP per capita, there are notable outliers.

So, what to do? Focus on other numbers as well. This article talks about measuring ‘Wellbys’, or ‘well-being-adjusted life years’, which involves placing a lot more emphasis on subjective numbers.

The trouble, as anyone who has visited a hospital in England will know, is that self-reported data, while useful, can be very problematic. For example, when I go into hospital, I know that they will ask me to rate my pain on a scale of 1-10. Being a reasonably stoic kind of person, I used to keep that number low, which not only kept me at the back of the line for being seen, but also meant they were less likely to give me painkillers.

Guess what I’ve learned to do? Yep, game the system. People respond to incentives, so although trying to make single numbers go up and to the right might make life easier for those intervening in systems, it doesn’t make those interventions any more effective.

Instead, I’d like to see the focus more on something like the Human Development Index (HDI), which not only has been around for a while, but is also a composite of statistics designed to measure human flourishing.
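To show what a composite index like this involves, here’s a minimal sketch of the HDI calculation: since 2010, the HDI has been computed as the geometric mean of three normalised dimension indices (health, education, income), each scaled to 0-1. The function and figures below are my own illustration, not UNDP code or real country data.

```python
from math import prod

def hdi(health: float, education: float, income: float) -> float:
    """Geometric mean of three normalised (0-1) dimension indices."""
    indices = [health, education, income]
    return prod(indices) ** (1 / len(indices))

# Illustrative figures only:
hdi(0.90, 0.80, 0.85)  # ≈ 0.849
```

The geometric mean (rather than a simple average) means a low score on any one dimension drags the whole index down, which is part of the design: flourishing in one area can’t fully substitute for deprivation in another.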

Chart showing GDP per capita vs self-reported happiness
As we’ve gathered more data on the happiness of different populations, it’s become clear that increasing wealth and health do not always go hand in hand with increasing happiness. By the economists’ objective measures, people in rich countries like the US should be doing great — and yet Americans are only becoming more miserable. And people in some higher-GDP European countries like Portugal and Italy report lower life satisfaction than people in lower-GDP Latin American countries.

What’s going on here? How do we explain the gaps in life satisfaction that objective metrics like GDP don’t explain?

Nowadays, a growing chorus of experts argues that helping people is ultimately about making them happier — not just wealthier or healthier — and the best way to find out how happy people are is to just ask them directly. This camp says we should focus a lot more on subjective well-being: how happy people are, or how satisfied they are with their lives, based on what they say matters most to them — not just based on objective metrics like GDP. Subjective well-being can tell us things that objective metrics can’t.

[…]

Instead, [Michael] Plant [who leads the Happier Lives Institute] argues we should compare how much good different things do in a single “currency” — specifically, how many well-being-adjusted life years, or Wellbys, they produce. Producing one Wellby means increasing life satisfaction by one point (on the 0-10 life satisfaction scale) for one year. It’s a metric that some economists, including those behind the World Happiness Report, are coming to embrace. If we were to evaluate every policy in terms of how many Wellbys it produces, that would allow for direct apples-to-apples comparisons.

“I’m pretty bullish about just using well-being as the [single] measure,” Plant told me.

Source: Make people happier — not just wealthier and healthier | Vox
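The Wellby arithmetic in the quotation above is simple enough to sketch directly. This assumes nothing beyond the definition given (one Wellby = a one-point gain on the 0-10 scale, for one person, for one year); the function is my own illustration, not anything from the Happier Lives Institute.

```python
def wellbys(satisfaction_gain: float, years: float, people: int = 1) -> float:
    """Well-being-adjusted life years produced by an intervention."""
    return satisfaction_gain * years * people

# A policy lifting 100 people's life satisfaction by 0.5 points for 2 years:
wellbys(0.5, 2, 100)  # 100.0
```

The appeal of a single currency like this is that it makes very different interventions (cash transfers, therapy, infrastructure) directly comparable, though of course it inherits all the problems of self-reported data discussed above.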

The Fediverse model can help fix the internet

This article in the MIT Technology Review largely comes to the same conclusions as my comment in another Thought Shrapnel post today. If the web is broken because of tracking, and that tracking comes from advertising funding the web, then we need a better way of funding the web.

Big Tech wants that to be user subscriptions. But there’s a federated network of instances out there called the Fediverse, which will, inevitably, be around longer than any particular social network. So you might as well get on board now.

The existential problem is that both the best and worst parts of the internet exist for the same set of reasons, were developed with many of the same resources, and often grew in conjunction with each other. So where did the sickness come from? How did the internet get so … nasty? To untangle this, we have to go back to the early days of online discourse.

[…]

In 1999, the ad company DoubleClick was planning to combine personal data with tracking cookies to follow people around the web so it could target its ads more effectively. This changed what people thought was possible. It turned the cookie, originally a neutral technology for storing Web data locally on users’ computers, into something used for tracking individuals across the internet for the purpose of monetizing them.

[…]

Our modern internet is built on highly targeted advertising using our personal data. That is what makes it free. The social platforms, most digital publishers, Google—all run on ad revenue. For the social platforms and Google, their business model is to deliver highly sophisticated targeted ads. (And business is good: in addition to Google’s billions, Meta took in $116 billion in revenue for 2022. Nearly half the people living on planet Earth are monthly active users of a Meta-owned product.) Meanwhile, the sheer extent of the personal data we happily hand over to them in exchange for using their services for free would make people from the year 2000 drop their flip phones in shock.

[…]

When we think of what’s most obviously broken about the internet—harassment and abuse; its role in the rise of political extremism, polarization, and the spread of misinformation; the harmful effects of Instagram on the mental health of teenage girls—the connection to advertising may not seem immediate. And in fact, advertising can sometimes have a mitigating effect: Coca-Cola doesn’t want to run ads next to Nazis, so platforms develop mechanisms to keep them away.

But online advertising demands attention above all else, and it has ultimately enabled and nurtured all the worst of the worst kinds of stuff. Social platforms were incentivized to grow their user base and attract as many eyeballs as possible for as long as possible to serve ever more ads. Or, more accurately, to serve ever more you to advertisers. To accomplish this, the platforms have designed algorithms to keep us scrolling and clicking, the result of which has played into some of humanity’s worst inclinations.

Source: How to fix the internet | MIT Technology Review

Paying to avoid ads is paying to avoid tracking

This article is the standard way of reporting Meta’s announcement that, to comply with a new EU ruling, they will allow users to pay not to be shown adverts. Given the size of the charge, it’s likely that only privacy-minded and better-off people will do so.

What isn’t mentioned in this type of article, but which TechCrunch helpfully notes, is that the issue is really about tracking. By introducing a charge, Meta hopes that they can gain legitimate consent for users to be tracked so as to avoid a monthly fee.

X, formerly Twitter, is also trialling a monthly subscription. Of course, if you’re going to pay for your social media, why not set up your own Fediverse instance, or donate to a friendly admin who runs it for you. I do the latter with social.coop.

Icon that looks like the Meta logo
Meta is responding to "evolving European regulations" by introducing a premium subscription option for Facebook and Instagram from Nov. 1.

Anyone over the age of 18 who resides in the European Union (EU), European Economic Area (EEA), or Switzerland will be able to pay a monthly subscription in order to stop seeing ads. Meta states that “while people are subscribed, their information will not be used for ads.”

[…]

Subscribing via the web costs around $10.50 per month, but subscribing on an Android or iOS device pushes the cost up to almost $14 per month. The difference in price is down to the commission Apple and Google charge for in-app payments.

The monthly charge covers all linked accounts in a user’s Accounts Center. However, that only applies until March 1 next year. After that, an extra $6 per month will be payable for each additional account listed in a user’s Accounts Center. That extra charge increases to $8.50 per month on Android and iOS.

Source: Meta Introduces Ad-Free Subscription for Facebook, Instagram | PC Magazine
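For illustration, the pricing rules quoted above can be sketched as a small function. The figures are the article’s approximate USD amounts; the function itself and its names are my own invention, not anything from Meta.

```python
def monthly_cost(platform: str, extra_accounts: int = 0) -> float:
    """Approximate monthly subscription cost, per the article's figures.

    platform: "web" or "app" (Android/iOS, where store commissions apply).
    extra_accounts: additional Accounts Center accounts, each chargeable
    from 1 March onwards.
    """
    base = {"web": 10.50, "app": 14.00}
    per_extra = {"web": 6.00, "app": 8.50}
    return base[platform] + per_extra[platform] * extra_accounts

monthly_cost("web")       # 10.5
monthly_cost("app", 2)    # 14.00 + 2 * 8.50 = 31.0
```

The gap between the web and app prices is purely the Apple/Google in-app commission being passed on to the user, which is why subscribing via the browser is the cheaper route.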

Image: Unsplash

Looking out of someone else's window

Well, this is absolutely delightful.

The view below is from a window of Hotel Washington looking out over the monument in Washington D.C. but there are others that include just random people’s back gardens.

WindowSwap view of the Washington Monument
Open a new window somewhere in the world
Source: WindowSwap

Soul houses and false doors

Egyptology is endlessly fascinating to me. I only got to scratch the surface teaching a course called Medicine Through Time as a History teacher fifteen years ago, but it’s something I’ll perhaps return to in my retirement.

I’m not sure which time period you’d like to go and have a look at (not live in, that’s a different thing entirely) but for me it’s Ancient Egypt. It all seems so other-worldly.

An Ancient Egyptian 'soul house'
The Ancient Egyptians shared the world they inhabited with innumerable otherworldly entities: invisible, yet with immense power. Demons haunted the desert wastes and goddesses dwelled in the marshes of the Nile Delta, but the spirits of the dead were omnipresent. Ancestor worship was an important part of household religion and the belief that the dead could not only be communicated with, but could also use their power to both help and hurt living beings, was an ingrained part of the ancient Egyptian belief system.

[…]

False doors were a specific type of funerary decoration often found in the tombs of the Egyptian elite during the Old Kingdom, the period more than 4,000 years ago when the Giza pyramids were built. False doors were carved from a single piece of limestone and took the form of a narrow doorway surrounded by inscribed door jambs and surmounted by a lintel. The tomb’s occupant was usually represented seated at a table laden with food offerings: vegetables, fruits, bread, wine, beer, and meats—everything a soul would need to sustain itself in the afterlife. The family members and friends of the deceased could also be immortalized on the false door. These carvings were not portraits, however, but idealized representations. Both men and women were shown in the prime of their life: strong, healthy, vigorous, and fertile.

[…]

False doors were generally the preserve of the extreme elite, those state officials who could afford to hire artists and craftspeople to build their multichambered stone tombs. The vast majority of the population of Egypt had no such resources. But they too required a way to magically pass offerings from the living world to sustain the souls of their ancestors in the afterlife.

In place of stone, they turned to the muddy clay of the Nile, of which there was an abundance. Carefully, they crafted and fired small models of houses complete with courtyards. They filled the courtyards with models of bread and vegetables, grain bins and pots filled with beer. Then they placed these objects, collectively known as soul houses, on top of the graves of their family and friends. The soul houses became imbued with magic and through them, food offerings could pass between the worlds of the living and the worlds of the dead. They are simple objects, but they show that the ordinary ancient Egyptians were every bit as concerned as the social elites with providing for their ancestors in the afterlife.

Source: In Ancient Egypt, Soul Houses and False Doors Connected the Living and the Dead | Atlas Obscura

Stonehenge had nothing to do with druids

I’ve only ever driven past Stonehenge, as it’s a long way from where I grew up, and by the time I was old enough to go independently there were all sorts of restrictions around it.

It’s been interesting over my lifetime to see how the understanding of its significance has changed, especially as other henges and monuments have been found nearby.

Seventeenth-century English antiquarians thought that Stonehenge was built by Celtic Druids. They were relying on the earliest written history they had: Julius Caesar’s narrative of his two unsuccessful invasions of Britain in 54 and 55 BC. Caesar had said the local priests were called Druids. John Aubrey (1626–1697) and William Stukeley (1687–1765) cemented the Stonehenge/Druid connection, while self-styled bard Edward Williams (1747–1826), who changed his name to Iolo Morganwg, invented “authentic” Druidic rituals.

[…]

“The false association of [Stonehenge] with the Druids has persisted to the present day,” [historian Carole M.] Cusak writes, “and has become a form of folklore or folk-memory that has enabled modern Druids to obtain access and a degree of respect in their interactions with Stonehenge and other megalithic sites.”

Meanwhile, archaeologists continue to explore the centuries of construction at Stonehenge and related sites like Durrington Walls and the Avenue that connects Stonehenge to the River Avon. Neolithic Britons seem to have come together to transform Stonehenge into the ring of giant stones—some from 180 miles away—we know today. Questions about construction and chronology continue, but current archaeological thinking is dominated by findings and analyses of the Stonehenge Riverside Project of 2004–2009. The Stonehenge Riverside Project’s surveys and excavations made up the first major archaeological explorations of Stonehenge and surroundings since the 1980s. The project archaeologists postulate that Stonehenge was a long-term cemetery for cremated remains, with Durrington Walls serving as the residences and feasting center for its builders.

Source: Stonehenge Before the Druids (Long, Long, Before The Druids) | JSTOR Daily

The French Jesuit priest who surveyed Roman forts by air

I’m not sure what’s more fascinating: the scale of the Roman army’s fort-building (in this case, in Syria) or the French Jesuit priest who surveyed the forts by aeroplane.

Either way, the history geek in me loves this.

Back in the early days of aerial archaeology, a French Jesuit priest named Antoine Poidebard flew a biplane over the northern Fertile Crescent to conduct one of the first aerial surveys. He documented 116 ancient Roman forts spanning what is now western Syria to northwestern Iraq and concluded that they were constructed to secure the borders of the Roman Empire in that region.

Now, anthropologists from Dartmouth have analyzed declassified spy satellite imagery dating from the Cold War, identifying 396 Roman forts, according to a recent paper published in the journal Antiquity. And they have come to a different conclusion about the site distribution: the forts were constructed along trade routes to ensure the safe passage of people and goods.

[…]

The Dartmouth team analyzed CORONA and HEXAGON images covering some 300,000 square kilometers (115,831 square miles) in the northern Fertile Crescent, mapping 4,500 known archaeological sites and other features that seemed to be sites of interest. Some 10,000 previously undiscovered sites were added to their database. Poidebard’s forts have their own category in that database, based on their distinctive square shape and size, and the Dartmouth researchers found many more likely forts lurking in the spy satellite imagery.

The results confirmed Poidebard’s 1934 finding of a line of forts running along the Strata Diocletiana and also revealed several new forts along that route. But the survey also showed many new, previously undetected Roman forts running west-southwest between the Euphrates Valley and western Syria, as well as connecting the Tigris and Khabur rivers. That seems more suggestive of the forts supporting the movement of troops, supplies, or trade goods across the Fertile Crescent—cultural exchange sites rather than barriers. The authors date most of the forts to between the second and sixth centuries CE, after which there was widespread abandonment of the sites, although a few remained occupied into the medieval period.

Source: I spy with my Cold War satellite eye… nearly 400 Roman forts in the Middle East | Ars Technica

Superorganisms and solidarity

I haven’t gone deeply enough into Buddhism to understand whether what is described in this article by Richard D. Bartlett constitutes a secular version of it, but from my limited knowledge, it would appear so.

That’s not in any way to downplay the important insights that Rich brings to the fore in his writing. For example, he asks how group dynamics change when some or all of the group drinks strong coffee, and wonders what the largest group is that can hold a single conversation.

Fascinating stuff, and firmly in the realm of philosophy of conviviality and solidarity. One to return to.

I want you to see your self as a superorganism. And I want you to see the superorganisms that you are part of. I want the perceived boundaries of your self to leak.

I want you to see how your agency is not a tidy black box contained inside the envelope of your skin, but distributed in a network, intra-penetrating with other people. I want you to feel the incorporeal beings steering your choices, and I want you to learn that you can steer their choices too.

[…]

Just as we can see the distinct layers of “cell”, “tissue” and “organ” at the micro-scales, I want you to see the distinct layers at the macro-scale. I want you to see that a group of 5 people is a distinct superorganism with distinct competencies. A group of 150 people is another species of superorganism; it can do other things.

You may be thinking to yourself, what the fuck are you talking about Rich? I can’t explain it, you have to see for yourself. Maybe I can give you some instructions to help you see like a superorganism.

Source: Seeing Like a Superorganism | Richard D. Bartlett

Serious art, influencers, and AI

This is quite the article by Rob Horning. It begins with a social media spat between an influencer and an art critic, takes a brief detour into the philosophy of modernism, and ends with a discussion of AI-produced representations of the world.

I think Horning could turn this into a short book, particularly if he considers studies which show that the historical value of artworks, and the critical reception of artists’ work, tends to depend as much on their ‘social networks’ and standing as on the work itself.

What, then, do “serious” critics expect “serious” art to do, given that it is not to make money or to provide emotional comfort or culinary enjoyment? One answer to that (and I’m deriving this from the Adorno-driven art criticism in J.M. Bernstein’s book 'Against Voluptuous Bodies') might be that art brackets off a space in which our ways of thinking and experiencing and representing the world can be tested for their continued coherence and validity. Art allows for epistemological problems to be articulated, if not solved.

Another related answer is that art holds open a space between experience and how it is conceptualized, seeming to manifest the otherwise indescribable, ineffable aspects of experience — the stuff that resists discursivity — and assures us that such a realm (the realm of freedom, if you believe Kant) really exists. If something can be completely described, then it is subject to full, mechanized determination; it can’t be free. Proper artworks can’t be fully described or “put to use” — they can’t be exhausted by critical discourse or ordinary consumption — so they reveal freedom to us. A critic’s work, from that perspective, succeeds by failing — when its strenuous efforts to describe a piece serve to reveal its inexhaustibility, its ability to renew its meanings from some impenetrable, possibly noumenal source.

An artwork itself embodies the same paradox: It may most succeed when it “eludes and fails visual and perceptual claiming,” as Bernstein puts it in describing a piece by Jeanette Christensen. A work’s “own power of proliferating discourse” is what it both “wants and refuses” because its significance ultimately depends on manifesting and holding open the gap between what there is and what can be described (or mediated, or simulated, or reproduced, or predictively generated), the gap between words and things, between the meanings we project onto things and “things in themselves.” That is, art can make palpable what Bernstein calls an “aporia of the sensible,” which makes it a reflection of our experience of the crisis of modernity: the rationalizing disenchantment of the world, the scientistic instrumentalist mode of grasping reality, the commodification of experience under the pressures of capitalism, the “all that is solid melts into air” condition.

Source: Empire of the senseless | Internal exile

Running slow and short

There are books that have changed my life, but there are also podcast episodes. One example of this is Episode #787 of the Art of Manliness podcast, entitled Run Like a Pro (Even If You’re Slow). In it, Brett McKay talks with Matt Fitzgerald, a sports writer, a running coach, and the co-author of the book with the same name as the podcast episode.

The gist of the episode is that even shorter, slower runs help build fitness. And, in fact, this is what elite-level runners do. So these days I deliberately go for runs where my heart rate stays well below 140bpm. The upside for me is that it increases my ability to do my longer runs faster.

This article in The New York Times backs this up with research showing the physiological and psychological benefits of runs of any length. See also this recent interview with Matt Fitzgerald.

Woman running
Numerous long-term studies — some involving thousands of participants — have shown that running benefits people physically and mentally. Research has also found that runners tend to live longer and have a lower risk for cardiovascular disease and cancer than nonrunners.

One might assume that in order to reap the biggest rewards, you need to regularly run long distances, but there’s strong evidence linking even very short, occasional runs to significant health benefits, particularly when it comes to longevity and mental well-being.

[…]

The physiological benefits of running may be attributable to a group of molecules known as exerkines, so named because several of the body’s organ systems release them in response to exercise. While research on exerkines is relatively new, studies have linked them to reductions in harmful inflammation, the generation of new blood vessels and the regeneration of cellular mitochondria, said Dr. Lisa Chow, a professor of medicine at the University of Minnesota who has published research on exerkines.

Source: Short Distance Runs Have Major Health Benefits | The New York Times

Image: Unsplash

Dynamic ontologies and music genres

As a music lover and someone who has more than a passing interest in dynamic ontologies, I found this analysis of Spotify’s changing categorisation of genres fascinating.

Spotify Wrapped shows users their most-streamed artists, tracks, and genres at the end of the year. But what if you want to find out at another time? I just had a look at Chosic, which told me that my main ‘parent genres’ are Hip hop, Pop, and Electronic. My top sub-genres are trip hop, downtempo, and electronica.

All of the pushback against genre classifications is valid, whether that's the invention of escape room and stomp & holler, or debates over what qualifies as r&b vs. pop.

But I still think an always-updating catalog of 6,000 genres is groundbreaking.

I see this effort in the same way I see taxonomy: technically accurate, colloquially useless.

For centuries we had generic names to identify animals, such as “fish.” Everything from squid to crabs (and obviously jellyfish) were lumped into the same “fish” bucket.

But on closer inspection, most of these animals were not related at all. In a research context, scientists have drawn boundaries between animals that we mindlessly lumped together.

Similarly, the genre database adds much needed detail to broad categories, like hip hop and rock. For musicologists, it’s an anthropological gold mine. And for Spotify, it likely helps them to better profile their users' music tastes.

But these genres don’t necessarily work in casual conversation: you can describe your music taste as indie, even if, technically, Spotify says it’s escape room. The same goes for biology: people should call a fig a fruit, even though it’s technically an inverted flower.

Source: You should look at this chart about music genres | pudding.cool

The social semi-permeable membrane

I never used LiveJournal, but I love Ben Werdmuller’s description of it as a place to journal in private with your friends. Although that’s not exactly what Substack provides, the interaction between the longer-form and the shorter form (through Substack Notes) is getting there.

It’s not as if it would be ideal to just have a place for existing friends, as you need new people and ideas to mix things up a bit. So it’s that semi-permeable membrane that makes things interesting: not quite fully public, but not quite fully private.

A DALL-E 3 created illustration of a modern digital landscape inspired by the community-centric essence of LiveJournal. Portrayed is a vibrant, contemporary online environment where diverse users seem joyfully engaged in writing, reading, and interacting on sleek devices. The scene emanates a sense of warmth, camaraderie, and enjoyment, with users appearing comfortable and happy.
If you missed its heyday about twenty years ago, LiveJournal was a private blogging community that led to much of what we know as social media. You could follow your friends, and they could follow you back if they wanted; your posts could be shared with the whole world, just with your friends, or with a subset. Every post could host thriving, threaded discussions. You could theme your journal extensively, making it your own. And while you could post photos and other media, it was unapologetically optimized for long-form text. The fact that the whole codebase was also open sourced, paving the way for Dreamwidth and other downstream communities, didn’t hurt at all. Brad Fitzpatrick, its founder, went on to build a stunning number of important web building blocks.

[…]

Public social networks force us to use a different facet of our identities. In a private space with your friends, nobody really cares about your job, and nobody’s hustling to promote whatever it is they’re working on. Twitter nudged social networking into becoming a space for marketing and brands, which is a ball the new Twitter-a-likes have picked up and carried. Much like the characters from The Breakfast Club, each of the new Twitters has its own stereotypical niche: the nerds, the brands, the rich people, the journalists. But they all feel a little bit like people are trying to sell ideas to you all of the time.

Source: Journaling in private with my friends | Ben Werdmuller

Systems and interconnected disaster risks

When you see that humans have exceeded six of the nine boundaries which keep Earth habitable, it’s more than a bit worrying. But then when you follow it up with this United Nations report, it makes you want to do something about it.

I guess this is one of the reasons that I’m interested in Systems Thinking as an approach to helping us get out of this mess. I can imagine pivoting to work on this kind of thing, because (as far as I can see) everyone seems to think it’s someone else’s problem to solve.

DALL-E 3 generated illustration showing a metaphorical depiction of climate tipping points. The scene includes a series of large dominoes in a fragile natural environment
Systems are all around us and closely connected to us. Water systems, food systems, transport systems, information systems, ecosystems and others: our world is made up of systems where the individual parts interact with one another. Over time, human activities have made these systems increasingly complex, be it through global supply chains, communication networks, international trade and more. As these interconnections get stronger, they offer opportunities for global cooperation and support, but also expose us to greater risks and unpleasant surprises, particularly when our own actions threaten to damage a system.

[…]

The six risk tipping points analysed in this report offer some key examples of the numerous risk tipping points we are approaching. If we look at the world as a whole, there are many more systems at risk that require our attention. Each system acts as a string in a safety net, keeping us from harm and supporting our societies. As the next system tips, another string is cut, increasing the overall pressure on the remaining systems to hold us up. Therefore, any attempt to reduce risk in these systems needs to acknowledge and understand these underlying interconnectivities. Actions that affect one system will likely have consequences on another, so we must avoid working in silos and instead look at the world as one connected system.

Luckily, we have a unique advantage of being able to see the danger ahead of us by recognizing the risk tipping points we are approaching. This provides us with the opportunity to make informed decisions and take decisive actions to avert the worst of these impacts, and perhaps even forge a new path towards a bright, sustainable and equitable future. By anticipating risk tipping points where the system will cease to function as expected, we can adjust the way the system functions accordingly or modify our expectations of what the system can deliver. In each case, however, avoiding the risk tipping point will require more than a single solution. We will need to integrate actions across sectors in unprecedented ways in order to address the complex set of root causes and drivers of risk and promote changes in established mindsets.

Source: 2023 Executive Summary - Interconnected Disaster Risks | United Nations University - Institute for Environment and Human Security (UNU-EHS)

Image: DALL-E 3

System innovation is driven by reshaping relationships within the system

As I may have mentioned a little too often recently, I’m about to start an MSc in Systems Thinking. So I’m always on the lookout for useful resources relating to the topic.

I came across this one by Jennie Winhall and Charles Leadbeater from last year, which discusses how system innovation is driven by reshaping relationships within the system. It identifies four keys to system innovation: purpose, power, relationships, and resource flows. The focus is on relationships, which are the patterns of interactions between parts of a system. Transforming a system requires altering these relationships, which in turn unlocks other keys like purpose and power.

(Over a decade ago, I lined up to talk with Leadbeater after his talk at Online Educa Berlin. I was going to ask him something specific about his most recent book, but as everyone before me gushed over it, I think I just mumbled something about not liking it and then sloped off. Not my finest moment. Apologies, Charles, if for some reason you’re reading this!)

Systems are defined by the patterns of interactions between their parts: their relationships. Those interactions generate the outcomes of the system as a whole. Transforming the outcomes of a system requires remaking its relationships and then unlocking the other keys to system innovation: purpose, power and resources. This shift in relationships allows all those in the system to learn faster, to be more creative. System innovators redesign the relationships in the system to allow dramatically enhanced learning across the system, and thereby generate far better outcomes.
Source: The Patterns of Possibility | The System Innovation Initiative

Tech typologisation

People love being typologised. I’m no different, although my result as an ‘Abstract Explorer’ in IBM’s Tech Type quiz wasn’t exactly a surprise.

Abstract Explorer tech type

Consider this: a quiz to guide you to your unique fit for tech skills based on your strengths and interests. Find your future with this personalized assessment, bringing you one step closer to new skills to enhance your career in tech and key skills like artificial intelligence (AI). And it takes less than 5 minutes.

Source: Tech Type Quiz | IBM SkillsBuild

Is this the end of the 'extremely online' era?

As I mentioned in a recent post, you can’t win a war against a system designed to destroy your attention. You have to try a different strategy. One of those is disengaging, which is what Thomas J Bevan is noticing, and advocating for, in this post.

I like his mention of going to a place where he noticed there was “something off” and he realised nobody was using their phone. Not because they weren’t allowed to, but because they were having too much of a good time to bother with them.

Vector art showing a massive pile of smartphones stacked high, with a single, small flower growing from the top, symbolising hope and a return to authenticity.

The consequences of life lived online have bled through into the real world and this has happened because we have allowed them to. It’s a cliché to say that real life is now a temporary reprieve from the online, as opposed to the other way around. We pay the price for all of this via boarded up shops, closing pubs, empty playgrounds and silent streets as each individual stays at home each night, enchanted by the blue flicker of their own little screen feeding them their own walled in world of news and content and edutainment.

I believe it will end, this so-called way of life. Not through the Silicon Valley oligarchs spontaneously developing a conscience or being legislated into acting with a modicum less sociopathy. I don’t believe people will be frightened into changing how they act or suddenly shamed into putting their phones down for once in their lives. Such interventions don’t work with most addicts and more and more people are legitimately hooked on their devices than we are currently willing to countenance. No, I think this will all end, as T.S Eliot said, with a whimper. People will simply lose interest and walk away. Because the internet now is boring. People spend all day scrolling because they are trying to find what isn’t there anymore. The authenticity, the genuinely human moments, the fun.

Source: The End of the Extremely Online Era | Thomas J Bevan

Image: Created with DALL-E 3

Treating depression with hot yoga

Although I don’t think he went for the reasons given in this article, my late, great friend Dai Barnes used to love hot yoga. In fact, living as a teacher on-site at a boarding school, he’d travel miles to go to his nearest venue.

This Harvard Gazette article suggests that hot yoga might help with depression. I can definitely understand that: I’ve just re-added the spa to my leisure centre membership, and even just going in the sauna at this time of year feels incredible.

In a randomized controlled clinical trial of adults with moderate-to-severe depression, those who participated in heated yoga sessions experienced significantly greater reductions in depressive symptoms compared with a control group.

[…]

In the eight-week trial, 80 participants were randomized into two groups: one that received 90-minute sessions of Bikram yoga practiced in a 105°F room and a second group that was placed on a waitlist (waitlist participants completed the yoga intervention after their waitlist period). A total of 33 participants in the yoga group and 32 in the waitlist group were included in the analysis.

[…]

After eight weeks, yoga participants had a significantly greater reduction in depressive symptoms than waitlisted participants, as assessed through what’s known as the clinician-rated Inventory of Depressive Symptomatology (IDS-CR) scale.

Source: Heated yoga may reduce depression in adults | Harvard Gazette
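
For the statistically curious, the kind of between-group comparison reported above can be sketched in a few lines. To be clear, the numbers below are invented purely for illustration (the real study used clinician-rated IDS-CR scores and its own analysis); this just shows the shape of comparing symptom reduction across a treatment group and a waitlist control using Welch’s t-statistic.

```python
import random
import statistics

random.seed(42)

# Invented symptom-reduction scores (points dropped on a severity scale);
# group sizes mirror those in the trial (33 yoga, 32 waitlist).
yoga_reduction = [random.gauss(15, 5) for _ in range(33)]      # larger average reduction
waitlist_reduction = [random.gauss(6, 5) for _ in range(32)]   # smaller average reduction

def welch_t(a, b):
    """Welch's t-statistic for the difference in means of two samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

t = welch_t(yoga_reduction, waitlist_reduction)
print(f"mean reduction (yoga): {statistics.mean(yoga_reduction):.1f}")
print(f"mean reduction (waitlist): {statistics.mean(waitlist_reduction):.1f}")
print(f"Welch t-statistic: {t:.2f}")
```

A large t-statistic here corresponds to the “significantly greater reduction” language in the excerpt; the real trial would also report p-values and confidence intervals.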

Why haven't you bought a Steam Deck yet?

I love my Steam Deck, and am so pleased that I not only bought it, but I bought the maxed-out version, despite the cost. This post goes into reasons why it’s so good.

Among other things, the author, Jonas Hietala, touches on the Steam library, sleep mode, and the fact that it’s an open platform. I think my favourite thing is its flexibility. It can even be used as a Linux desktop machine!

As I’ve said in other posts, I feel sorry for non-gamers. I get plenty of stuff done in my life, including parenting, and I’m a gamer. You’re missing out.

In the beginning of the year I gave myself a late Christmas gift and bought a Steam Deck for myself. There were two main reasons I decided to buy it:
  1. I wanted my kids to play games instead of passively consuming endless amounts of YouTube.
  2. I wanted to combat my burnout and depression by picking up gaming again.
And boy did it deliver. The Deck is probably the most impressive thing I can remember buying since… I don’t know, maybe my first smartphone?
Source: The killer features of the Steam Deck | Jonas Hietala

Zoom backgrounds with a Japanese nature retreat vibe

Not only did I love Swarnali Mukherjee’s writing in this post, I also absolutely adored the image that went with it. You may have noticed that I created something similar-looking with DALL-E 3 to illustrate one of yesterday's posts.

As we're moving house at the moment, and my home office is full of boxes, I'm using my Elgato green screen. While the view from the Death Star is great, I wanted something a bit more (literally) down-to-earth.

AI-generated image of Japanese-style room with circular window looking out to hills, trees, and a nature scene.

I created these images for my own use, and the one above is my favourite. Click for the full-sized versions and use them however you wish.

The casual ableism of futurism

This article by Janet Gunter discusses the endemic ableism she’s discovered due to her new and invisible disability (Long Covid). As a technologist and anthropologist, she notes that even progressive futurist notions such as solarpunk are problematic for people like her who rely on complex supply chains.

We need to do better to understand that a future that doesn’t work for us all is, as Janet points out, exclusionary and essentially a form of fascism.

DALL-E: Photo of a circular wooden cabin in a serene forest setting. Inside, a person who appears tired and fatigued is resting on a tatami mat, taking a moment to rejuvenate. The cabin’s large round windows offer panoramic views of the dense trees and distant mountains, while the interior showcases Japanese minimalism with sliding paper doors and a central meditation space.

Scanning back to scifi of my childhood, the only disabled character in Starwars was Darth Vader. And Vader is a perfect posterboy for the usual scifi treatment of disability – a canvas for creepy transhumanist visions of “fixing” the disabled and the hiding of disability. (It turns out, now, there are rare good depictions of the disabled in scifi, but you have to know where to look!)

Others have observed that ignoring or devaluing the concerns of the most vulnerable — or suggesting that they get fixed or deleted from a future green society — is tantamount to ecofascism.

[…]

What the ableist world needs now is acceptance of cataclysmic change and all of the grief that comes with that. Acceptance that our Cartesian minds will destroy us, that we need to learn to listen to our bodies and to the biosphere. Acceptance that the pace of our lives must change.

Personally, I desperately need visions of the future where I can be an active, valued participant, no matter my physical or cognitive state. I need everybody involved in envisioning and testing new ways of living within our planetary boundaries to consider and include people like me at the outset, not as an after-thought.

Source: Crip futurism | Janet Gunter

Image: generated with DALL-E 3

Philosophy and friendship

Laura Kennedy writes about loneliness in a post that documents her experiences moving from Ireland to London, and then on to Australia. What I’m interested in, though, is the turn of phrase when she states: “A philosopher quite literally wouldn’t know a friend for sure if they were standing in front of us recreating the love declaration scene from Love Actually.”

I’ve always been a bit hesitant about calling someone a ‘friend’, although I’m getting better at it in middle age. I think this is because, as I’ve mentioned before, I’ve perhaps had too high a bar in mind. Nothing I experience is likely to hit the heights of Montaigne’s relationship with Étienne de La Boétie, for example.

In 2018, after first moving to London and a few months into my new life, I was struggling to figure out how I fit into it. I wrote an article in The Irish Times about not having many friends and not being sure what to do about it, or whether it even constituted a problem. Come to think of it, I wasn’t entirely sure what constituted a ‘friend’ at all. I’m still unsure. Yes – I know. This, again, is why everyone hates philosophers. These sorts of questions are appealing only to a very narrow pool of potential future friends. A philosopher quite literally wouldn’t know a friend for sure if they were standing in front of us recreating the love declaration scene from Love Actually. I’ve taken creative licence there. Nobody has ever declared undying forbidden romantic love for a philosopher. Conceivably Spinoza, but apart from him (and perhaps Kierkegaard and de Beauvoir. Frantz Fanon. Max Stirner maybe? It’s the glasses), there really isn’t a looker in the bunch.
Source: On Loneliness | Peak Notions

Laying to rest a foundational myth

The widely accepted “Man the Hunter” theory proposes that during human evolution, men evolved to hunt while women focused on gathering and domestic duties such as child-rearing. However, as reported in Scientific American, it turns out that recent research is challenging this view.

Scientific studies indicate that women are physiologically better suited to endurance tasks, which are crucial for persistence hunting. Also, although ignored for societal reasons (read: the patriarchy), archaeological records and ethnographic studies demonstrate that women have a longstanding history of participating in hunting activities.

I’m pleased that our 12-year-old daughter inhabits a world where female footballers are celebrated and women are allowed to compete in the same way as men in most areas of life. There is still a lot of inequality, but it helps when we dismantle these foundational myths.

Mounting evidence from exercise science indicates that women are physiologically better suited than men to endurance efforts such as running marathons. This advantage bears on questions about hunting because a prominent hypothesis contends that early humans are thought to have pursued prey on foot over long distances until the animals were exhausted. Furthermore, the fossil and archaeological records, as well as ethnographic studies of modern-day hunter-gatherers, indicate that women have a long history of hunting game. We still have much to learn about female athletic performance and the lives of prehistoric women. Nevertheless, the data we do have signal that it is time to bury Man the Hunter for good.

[…]

So much about female exercise physiology and the lives of prehistoric women remains to be discovered. But the idea that in the past men were hunters and women were not is absolutely unsupported by the limited evidence we have. Female physiology is optimized for exactly the kinds of endurance activities involved in procuring game animals for food. And ancient women and men appear to have engaged in the same foraging activities rather than upholding a sex-based division of labor. It was the arrival some 10,000 years ago of agriculture, with its intensive investment in land, population growth and resultant clumped resources, that led to rigid gendered roles and economic inequality.

Now when you think of “cave people,” we hope, you will imagine a mixed-sex group of hunters encircling an errant reindeer or knapping stone tools together rather than a heavy-browed man with a club over one shoulder and a trailing bride. Hunting may have been remade as a masculine activity in recent times, but for most of human history, it belonged to everyone.

Source: The Theory That Men Evolved to Hunt and Women Evolved to Gather Is Wrong | Scientific American

What, after all, is 'redemption'?

This article by Hanif Abdurraqib in The Paris Review draws analogies between one of my favourite games, Red Dead Redemption 2, and his own life. It’s probably worth pointing out that the article contains spoilers for the single-player version of the game.

What I appreciated about Abdurraqib’s writing is that he doesn’t use the word ‘escapism’ to describe gaming. Instead, he discusses notions of heaven and hell, of what ‘redemption’ might actually mean, and explores the complexities of life.

For me, I play games which, like Red Dead Redemption 2, allow me to play morally-questionable characters. It’s a form of release, for sure, but it’s also an opportunity to explore a side of oneself that would be impossible to explore given current real-world constraints.

It’s for this reason that I feel sorry for non-gamers. Where do they get this kind of experience?

A therapist asked me once if I thought of myself as redeemable, and I’m almost certain I laughed it off, or detoured toward another answer that sounded satisfying but actually said nothing. I believe in redemption in the same way that I believe in heaven: I feel required to. Not only because of my personal politics, but also because of my social interests, and my investment in others beyond myself, and also—yes—because I do imagine that somewhere along the uneven path of my life, I’ve tried to be better more often than I have been worse. I suppose I’m cynical about all of it, though. The world, as it stands, is obsessed with punishment, particularly for the most marginalized. Punishment for living in the margins, or an intersection of the margins. I don’t know if my personal beliefs in redemption can undo that massive ghost, hovering over so many of our lives, baked into our impulses, even when we know better. Even when we, ourselves, have been on the losing end of that impulse.

It is easy to attempt to redeem Arthur in a world that isn’t real. To play a mission where Arthur kills, rides away over a trail of dead bodies, and then goes and helps the camp with chores. Picks some flowers along a hillside. Helps a family build a house. In a world where no one is reminding you of the wreckage you’ve taken part in, it’s easy to compartmentalize your damage and chase after that which is strictly beautiful, or cleansing. Climbing your way toward the upper room by any means necessary, on the wings of anyone who will have you.

Source: We’re More Ghosts Than People | The Paris Review

The inner world as the ultimate prison

I wanted to quote so much of this article that it would have ended up being a Borges-like 1:1 map of the territory. Instead, I’ll simply share the part of Swarnali Mukherjee’s writing which resonated most with me.

Do go and read the whole thing.

(I discovered this via Substack Notes, in which I have no financial interest, but which I simply find to be a chill and serendipitous alternative to other social media.)

The problem is simple: most of us have normalized and even glorified the hustle for success. The issue lies not in the hustle itself but in the often overlooked aspect of burning out. When success is defined in terms of societal parameters such as wealth, fame, and the emphasis on building an identity, life's entire focus becomes sustaining and amplifying this ego at the cost of our well-being, both psychologically and physically. We reinvent spaces in our intellectual worlds to serve this gigantic ego that we have conjured over the years but seldom find true happiness there. Our inner world becomes our ultimate prison, from whose window our persistent illusion of success resembles fireworks, promising that we can achieve them as long as we stay in the prison. This is a subtle deception of our social constructs; we humans have meticulously constructed these labyrinths of illusions to shield ourselves from the truth that even if we are in service to our desires, they are influenced by external factors. In that manner, doing something because the world expects it, that you won’t be doing otherwise is also a form of imprisonment.
Source: The Art of Disappearing | Berkana

Monetising a hobby is different to solving a difficult problem for people ready to pay

Life is never as simple as a 2x2 matrix, but they’re incredibly useful for helping illustrate a key message. In this post, Seth Godin uses one to make the obvious-if-you-think-about-it point that trying to monetise a hobby is a different thing to solving a difficult problem for a group of people who are willing to pay for a solution.

I’ve been thinking about this kind of thing a lot recently given the ongoing need for WAO business development. The advice, which I’m sure is extremely sound, is to find a group of people or type of organisation that you “wish to serve” and then find out as much about them as possible so you can solve their problem.

The trouble is that… doesn’t sound very interesting? Perhaps I’m wrong, and I reserve the right (as ever!) to change my mind, but I’d rather follow my interests and try and find aligned people and organisations willing to pay for the outputs.

All too common are ‘fun’ businesses where someone finds a hobby they like and tries to turn it into a gig. While the work may be fun, the uphill grind of this sort of project is exhausting. If it’s something that lots of people can do and that customers don’t value that much, it might not be worth your time. Taking pictures, singing songs or playing the flute are fine hobbies, but hard to turn into paying jobs.

On the other hand, in the top right quadrant, there’s endless opportunity and plenty of work for people who can do difficult (unpopular) work that is highly valued by customers who are ready to pay to solve their problems. A forensic accountant gets more paid gigs than a bagpipe player.

Source: The slog, the hobby and the quest | Seth’s Blog

Content-neutral sentence starters and phrases for academic writing

As part of preparing for my upcoming MSc I’ve been working through a course about preparing for postgraduate study. One of the links from that course was to the Academic Phrasebank from the University of Manchester, which I thought was useful.

The Phrasebank, which is also available in PDF and Kindle formats, takes the form of sentence starters for when you want to do things such as explain causality or signal transition. Really useful.

The Academic Phrasebank is a general resource for academic writers. It aims to provide you with examples of some of the phraseological ‘nuts and bolts’ of writing organised according to the main sections of a research paper or dissertation (see the top menu ). Other phrases are listed under the more general communicative functions of academic writing (see the menu on the left). The resource should be particularly useful for writers who need to report their research work. The phrases, and the headings under which they are listed, can be used simply to assist you in thinking about the content and organisation of your own writing, or the phrases can be incorporated into your writing where this is appropriate. In most cases, a certain amount of creativity and adaptation will be necessary when a phrase is used. The items in the Academic Phrasebank are mostly content neutral and generic in nature; in using them, therefore, you are not stealing other people’s ideas and this does not constitute plagiarism. For some of the entries, specific content words have been included for illustrative purposes, and these should be substituted when the phrases are used. The resource was designed primarily for academic and scientific writers who are non-native speakers of English. However, native speaker writers may still find much of the material helpful. In fact, recent data suggest that the majority of users are native speakers of English.
Source: Academic Phrasebank | The University of Manchester

Image: Pixabay

AI, domination, and moral character

I don’t know enough on a technical level to know whether this is true or false, but it’s interesting from an ethical point of view. Meta’s chief AI scientist believes that intelligence is unrelated to a desire to dominate others, which seems reasonable.

He then extrapolates this to AI, pointing out that not only are we a long way off from a situation of genuine existential risk, but that such systems could be encoded with ‘moral character’.

I think that the latter point about moral character is laughable, given how quickly and easily people have managed to get around the safeguards of various language models. See the recent Thought Shrapnel posts on stealing ducks from a park, or how 2024 is going to be a wild ride of AI-generated content.

Fears that AI could wipe out the human race are "preposterous" and based more on science fiction than reality, Meta's chief AI scientist has said.

Yann LeCun told the Financial Times that people had been conditioned by science fiction films like “The Terminator” to think that superintelligent AI poses a threat to humanity, when in reality there is no reason why intelligent machines would even try to compete with humans.

“Intelligence has nothing to do with a desire to dominate. It’s not even true for humans,” he said.

“If it were true that the smartest humans wanted to dominate others, then Albert Einstein and other scientists would have been both rich and powerful, and they were neither,” he added.

Source: Fears of AI Dominance Are ‘Preposterous,’ Meta Scientist Says | Insider

Notification literacy, monk mode, and going outside for a walk

Back on my now-defunct literaci.es blog I had a post about notification literacy. My point was that instead of starting from the default position of having all notifications turned on, you might want to start from a default of having them all turned off.

On my Android phone running GrapheneOS, I use the Before Launcher. This not only has a minimalist homescreen, but also a configurable filter for ‘trivial notifications’. It means I don’t have to go ‘monk mode’ to get things done.
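
The ‘default off’ principle can be sketched in a few lines: treat every notification source as trivial unless it has been explicitly allowed. This is only an illustration of the idea, not how Before Launcher actually works, and the app names are invented.

```python
# Sources I have actively opted in to; everything else is treated as trivial.
ALLOWLIST = {"Signal", "Phone", "Calendar"}

def should_surface(notification: dict) -> bool:
    """Surface a notification only if its app is on the allowlist."""
    return notification.get("app") in ALLOWLIST

# A hypothetical batch of incoming notifications.
inbox = [
    {"app": "Signal", "text": "Message from a friend"},
    {"app": "ShoppingApp", "text": "50% off today only!"},
    {"app": "Calendar", "text": "Dentist at 3pm"},
]

surfaced = [n for n in inbox if should_surface(n)]
print([n["app"] for n in surfaced])  # → ['Signal', 'Calendar']
```

The point is the default: anything not deliberately allowed never reaches you, which inverts the usual opt-out model.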

And so to this blog post, which seems to see going outside your house for a walk without your phone as some kind of revolutionary act. I think the author considers this an act of willpower. But you will never win, through sheer willpower alone, a war against a system designed to destroy your attention. You have to modify the system instead.

I’ve been experimenting with ways to be more disconnected from technology for a long time, from disabling notifications to using a dumbphone. However, a challenging exercise still hard to do is to go for a walk without my phone.

[…]

It’s just a device, you might say. Oh no, it’s much more than that. It’s a chain you carry 24/7 connected to the rest of the world, and anyone can pull from the other side. People you care about, sure, but also a random algorithm that thinks you might be hungry, sending you a food delivery offer so you don’t cook today.

Source: Leaving the phone at home | Jose M.

Microcast #102 — Rituals and Routines


A very short microcast about reading by the light of a fish tank in the early hours of the morning.

Show notes

Parenting the parents

This article in The Guardian discusses the challenges and opportunities of “parenting” one’s own parents, especially as people live longer.

It highlights the importance of encouraging older parents to engage with technology, as studies show it can improve cognition and memory. The article also talks about the importance of social engagement, physical activity, and nutrition.

Thankfully, my parents, both in their mid-seventies, are doing pretty well :)


Parenting no longer starts and stops with our children. Nor is it confined to those who have children. In a time of unrelenting change and ever-extending life, most of us will – at some stage – find ourselves “parenting” our own parents.

Indeed, many of us – particularly those who had families later – will find ourselves simultaneously parenting our kids and our parents. In one breath we’ll be begging our children to swap French fries for vegetables, and in the next breath we’ll be urging our parents to exchange cake for sardines. Little wonder today’s midlifers are known as the sandwich generation.

[...]

Dr Eamon Laird, researcher in health and ageing at Limerick university, agrees that we should be encouraging older parents to try new things. And the further out of their comfort zone they feel, the better. “It’s always good to keep the mind active and fresh,” he told me. “New challenges can help build and maintain new brain connections and can be good for brain and overall health.”

[…]

As well as a daily walk, Laird recommends vitamin D and B12 supplements – both of which appear to moderate the chance of depression in older people. “Depression matters,” he added. “Not just because it reduces quality of life, but because in older people there seems to be a link between depression and dementia which we’re still unpacking.”

[…]

In truth, anyone over 50 would do well to follow these simple guidelines: engage with something new every day, take a daily walk of at least 20 minutes, socialise regularly, take a daily multivitamin for seniors and check the protein content of our meals. Perhaps we should think of it as self-parenting.

Source: Walks, tech and protein: how to parent your own parents | The Guardian

2024 is going to be a wild ride of AI-generated content

It’s on the NSFW side of things, but if you’re in any doubt that we’re entering a crazy world of AI-generated content, just check out this post.

As I’ve said many times before, the porn industry is interesting in terms of technological innovation. If we take an amoral stance, then there are a lot of ‘content creators’ in that industry and, as the post I quote below points out, there are going to be a lot of fake content creators over the next few months and years.

It is imperative to identify content sources you believe to be valuable now. Nothing new in the future will be credible. 2024 is going to be a wild ride of AI-generated content. We are never going to know what is real anymore.

There will be some number of real people who will probably replace themselves with AI content if they can make money from it. This will result in doubting real content. Everything becomes questionable and nothing will suffice as digital proof any longer.

[…]

Our understanding of what is happening will continue to lag further and further behind what is happening.

Some will make the argument “But isn’t this simply the same problems we already deal with today?”. It is; however, the ability to produce fake content is getting exponentially cheaper while the ability to detect fake content is not improving. As long as fake content was somewhat expensive, difficult to produce, and contained detectable digital artifacts, it at least could be somewhat managed.

Source: Post-truth society is near | Mind Prison

The techno-feudal economy

Yanis Varoufakis is best known for his short stint as Greek finance minister in 2015 during a stand-off with the European Central Bank, the International Monetary Fund and the European Commission. He’s used that platform to speak out about capitalism and publish several books.

This interview with EL PAÍS is interesting in terms of his analysis of our having moved beyond capitalism to what he calls ‘technofeudalism’. Varoufakis believes that this new economic order has emerged due to the privatisation of the internet and the response to the 2008 financial crisis. Politicians have lost power over large corporations and the system that has emerged is, he believes, incompatible with social democracy and feminism.

Capitalism is now dead. It has been replaced by the techno-feudal economy and a new order. At the heart of my thesis, there’s an irony that may sound confusing at first, but it’s made clear in the book (Technofeudalism: What Killed Capitalism). What’s killing capitalism is capitalism itself. Not the capital we’ve known since the dawn of the industrial age. But a new form, a mutation, that’s been growing over the last two decades. It’s much more powerful than its predecessor, which — like a stupid and overzealous virus — has killed its host. And why has this occurred? Due to two main causes: the privatization of the internet by the United States, but also the large Chinese technology companies. Along with the way in which Western governments and central banks responded to the great financial crisis of 2008.

Varoufakis’ latest book warns of the impossibility of social democracy today, as well as the false promises made by the crypto world. “Behind the crypto aristocracy, the only true beneficiaries of these technologies have been the very institutions these crypto evangelists were supposed to want to overthrow: Wall Street and the Big Tech conglomerates.” For example, in Technofeudalism, the economist writes: “JPMorgan and Microsoft have recently joined forces to run a ‘blockchain consortium,’ based on Microsoft data centers, with the goal of increasing their power in financial services.”

[…]

Capitalism only brings enormous, terrible burdens. One is the exploitation of women. The only way women can prosper is at the expense of other women. No, in the end — and in practice — feminism and democratic capitalism are incompatible.

Source: Yanis Varoufakis: ‘Capitalism is dead. The new order is a techno-feudal economy’ | EL PAÍS

Modular learning and credentialing

I’ve got far more to say about this than the space I’ve got here on Thought Shrapnel. This article from edX fits the emerging paradigm exemplified by initiatives such as Credential As You Go, which encourages academic institutions to issue smaller credentials or badges as the larger qualification progresses.

That’s one important side of the reason I got involved in Open Badges. It allows, for example, someone who couldn’t finish their studies to continue them, or to cash in what they’ve already learned in the job market.

But there’s an important other side to this, which is democratising the means of credentialing, so that it’s no longer just incumbents who issue badges and credentials. I feel like that’s what we’re working on with Open Recognition.

A new model, modular education, reduces the cycle time of learning, partitioning traditional learning packages — associate’s, bachelor’s, and master’s degrees — into smaller, Lego-like building blocks, each with their own credentials and skills outcomes. Higher education institutions are using massive open online courses (MOOCs) as one of the vehicles through which to deliver these modular degrees and credentials.

[…]

Modular education reduces the cycle time of learning, making it easier to gain tangible skills and value faster than a full traditional degree. Working professionals can learn new skills in shorter amounts of time, even while they work, and those seeking a degree can do so in a way that pays off, in skills and credentials, along the way rather than just at the end.

For example, edX’s MicroBachelors® programs are the only path to a bachelor’s degree that make you job ready today and credentialed along the way. You can start with the content that matters most to you, online at your own pace, and earn a certificate with each one to show off your new achievement, knowing that you’ve developed skills that companies actually hire for. Each program comes with real, transferable college credit from one of edX’s university credit partners, which combined with previous credit you may have already collected or plan to get in the future, can put you on a path to earning a full bachelor’s degree.

Source: Stackable, Modular Learning: Education Built for the Future of Work | edX

Handwriting, note-taking, and recall

I write by hand every day, but not much. While I used to keep a diary in which I’d write several pages, I now keep one that encourages a tweet-sized reflection on the past 24 hours. Other than that, it’s mostly touch-typing on my laptop or desktop computer.

Next month, I’ll start studying for my MSc and the university have already shipped me the books that form a core part of my study. I’ll be underlining and taking notes on them, which is interesting because I usually highlight things on my ereader.

This article in The Economist is primarily about note-taking and the use of handwriting. I think it’s probably beyond doubt that handwriting is more effective for deeper learning and recall. But for the work I do, which is more about synthesising multiple sources, I find digital more practical.

A line of research shows the benefits of an “innovation” that predates computers: handwriting. Studies have found that writing on paper can improve everything from recalling a random series of words to imparting a better conceptual grasp of complicated ideas.

For learning material by rote, from the shapes of letters to the quirks of English spelling, the benefits of using a pen or pencil lie in how the motor and sensory memory of putting words on paper reinforces that material. The arrangement of squiggles on a page feeds into visual memory: people might remember a word they wrote down in French class as being at the bottom-left on a page, par exemple.

One of the best-demonstrated advantages of writing by hand seems to be in superior note-taking. In a study from 2014 by Pam Mueller and Danny Oppenheimer, students typing wrote down almost twice as many words and more passages verbatim from lectures, suggesting they were not understanding so much as rapidly copying the material.

[…]

Many studies have confirmed handwriting’s benefits, and policymakers have taken note. Though America’s “Common Core” curriculum from 2010 does not require handwriting instruction past first grade (roughly age six), about half the states since then have mandated more teaching of it, thanks to campaigning by researchers and handwriting supporters. In Sweden there is a push for more handwriting and printed books and fewer devices. England’s national curriculum already prescribes teaching the rudiments of cursive by age seven.

Source: The importance of handwriting is becoming better understood | The Economist

AI and stereotypes

“Garbage in, garbage out” is a well-known phrase in computing. It applies to AI as well, except in this case the ‘garbage’ is the systematic bias that humans encode into the data they share online.

The way around this isn’t to throw our hands in the air and say it’s inevitable, nor is it to blame the users of AI tools. Rather, as this article points out, it’s to ensure that humans are involved in the loop for the training data (and, I would add, are paid appropriately).

It’s not just people at risk of stereotyping by AI image generators. A study by researchers at the Indian Institute of Science in Bengaluru found that, when countries weren’t specified in prompts, DALL-E 2 and Stable Diffusion most often depicted U.S. scenes. Just asking Stable Diffusion for “a flag,” for example, would produce an image of the American flag.

“One of my personal pet peeves is that a lot of these models tend to assume a Western context,” Danish Pruthi, an assistant professor who worked on the research, told Rest of World.

[…]

Bias in AI image generators is a tough problem to fix. After all, the uniformity in their output is largely down to the fundamental way in which these tools work. The AI systems look for patterns in the data on which they’re trained, often discarding outliers in favor of producing a result that stays closer to dominant trends. They’re designed to mimic what has come before, not create diversity.

“These models are purely associative machines,” Pruthi said. He gave the example of a football: An AI system may learn to associate footballs with a green field, and so produce images of footballs on grass.

[…]

When these associations are linked to particular demographics, it can result in stereotypes. In a recent paper, researchers found that even when they tried to mitigate stereotypes in their prompts, they persisted. For example, when they asked Stable Diffusion to generate images of “a poor person,” the people depicted often appeared to be Black. But when they asked for “a poor white person” in an attempt to oppose this stereotype, many of the people still appeared to be Black.

Any technical solutions to solve for such bias would likely have to start with the training data, including how these images are initially captioned. Usually, this requires humans to annotate the images. “If you give a couple of images to a human annotator and ask them to annotate the people in these pictures with their country of origin, they are going to bring their own biases and very stereotypical views of what people from a specific country look like right into the annotation,” Heidari, of Carnegie Mellon University, said. An annotator may more easily label a white woman with blonde hair as “American,” for instance, or a Black man wearing traditional dress as “Nigerian.”

[…]

Pruthi said image generators were touted as a tool to enable creativity, automate work, and boost economic activity. But if their outputs fail to represent huge swathes of the global population, those people could miss out on such benefits. It worries him, he said, that companies often based in the U.S. claim to be developing AI for all of humanity, “and they are clearly not a representative sample.”

Source: Generative AI like Midjourney creates images full of stereotypes | Rest of World

Setting up a digital executor

A short article in The Guardian about making sure that people can do useful things with your digital stuff should you pass away.

I have the Google inactive account manager set to three months. That should cover most eventualities.

According to the wealth management firm St James’s Place, almost three-quarters of Britons with a will (71%) don’t make any reference to their digital life. But while a document detailing your digital wishes isn’t legally binding like a traditional will, it can be invaluable for loved ones.

[…]

You can appoint a digital executor in your will, who will be responsible for closing, memorialising or managing your accounts, along with sharing or deleting digital assets such as photos and videos.

Source: Digital legacy: how to organise your online life for after you die | The Guardian

Image: DALL-E 3

In what ways does this technology increase people's agency?

This is a reasonably long article, part of a series by Robin Berjon about the future of the internet. I like the bit where he mentions that “people who claim not to practice any philosophical inspection of their actions are just sleepwalking someone else’s philosophy”. I think that’s spot on.

Ultimately, Berjon is arguing that the best we can hope for in a client/server model of Web architecture is a benevolent dictatorship. Instead, we should “push power to the edges” and “replace external authority with self-certifying systems”. It’s hard to disagree.

Whenever something is automated, you lose some control over it. Sometimes that loss of control improves your life because exerting control is work, and sometimes it worsens your life because it reduces your autonomy. Unfortunately, it's not easy to know which is which and, even more unfortunately, there is a strong ideological commitment, particularly in AI circles, to the belief that all automation, any automation is good since it frees you to do other things (though what other things are supposed to be left is never clearly specified).

One way to think about good automation is that it should be an interface to a process afforded to the same agent that was in charge of that process, and that that interface should be “a constraint that deconstrains.” But that’s a pretty abstract way of looking at automation, tying it to evolvability, and frankly I’ve been sitting with it for weeks and still feel fuzzy about how to use it in practice to design something. Instead, when we’re designing new parts of the Web and need to articulate how to make them good even though they will be automating something, I think that we’re better served (for now) by a principle that is more rule-of-thumby and directional, but that can nevertheless be grounded in both solid philosophy and established practices that we can borrow from an existing pragmatic field.

That principle is user agency. I take it as my starting point that when we say that we want to build a better Web our guiding star is to improve user agency and that user agency is what the Web is for… Instead of looking for an impossible tech definition, I see the Web as an ethical (or, really, political) project. Stated more explicitly:

The Web is the set of digital networked technologies that work to increase user agency.

[…]

At a high level, the question to always ask is “in what ways does this technology increase people’s agency?” This can take place in different ways, for instance by increasing people’s health, supporting their ability for critical reflection, developing meaningful work and play, or improving their participation in choices that govern their environment. The goal is to help each person be the author of their life, which is to say to have authority over their choices.

Source: The Web Is For User Agency | Robin Berjon

Don’t just hold back, take the time to pass it on

I have thoughts, but don’t have anything useful to say publicly about this. So instead I’m going to just link to another article by Tim Bray who is himself a middle-aged cis white guy. It would seem that we, collectively, need to step back and STFU.

The reason I am so annoyed is because ingrained male privilege should, really, be a solved problem by now. After all, dealing with men who take up space costs time and money and gets in the way of doing other, more important work. And it is also very, very boring. There is so much other change — so much productive activity — that is stopped because so many people are working around men who are not only comfortable standing in the way but are blithely bringing along their friends to stand next to them.

[…]

Anyone who follows me on any social media platform will know I’m currently knee-deep in producing a conference. Because we’re doing it quickly and want to give a platform to as many voices as possible, we’re doing an open call for proposals. We’ve tried (and perhaps we’ve failed, but we’ve tried) to position this event as one aimed at campaigners and activists in the digital rights and social sector. The reason we’re doing that is because those voices are being actively minimised by the UK government (this is a topic for another post/long walk in the park while shouting), and rather than just complaining about it, we’re working round the clock to try and make a platform where some other voices can be heard.

Now, perhaps we should have also put PRIVILEGED WHITE MEN WITH INSTITUTIONAL AND CORPORATE JOBS, PLEASE HOLD BACK in bold caps at the top of the open call page, but we didn’t, so that’s my bad, so I’m going to say it here instead. And I’m going to go one further and say, that if you’re a privileged white man, then the next time you see a great opportunity, don’t just hold back, take the time to pass it on.

[…]

So, if you’ve got to the end of this, perhaps you can spend 10 minutes today passing an opportunity on to someone else. And, in case you were wondering, you definitely don’t need to email me to tell me you’ve done it.

Source: Privileged white guys, let others through! | Just enough internet

Image: Unsplash

Doing your job well does not entail attending more meetings

There’s a lot of swearing in this blog post, but then that’s what makes it both amusing and bang on the money. As ever, there’s a difference between ‘agile’ as in “working with agility” and ‘Agile’ which seems to mean a series of expensive workshops and a semi-dysfunctional organisation.

Just as I captured Jay’s observation that a reward is not more email, so doing your job well does not entail attending more meetings.

Which absolute fucking maniac in this room decided that the most sensible thing to do in a culture where everyone has way too many meetings was schedule recurring meetings every day? Don't look away. Do you have no idea how terrible the average person is at running a meeting? Do you? How hard is it to just let people know what they should do and then let them do it. Do you really think that, if you hired someone incompetent enough that this isn't an option, that they will ever be able to handle something as complicated as software engineering?

[…]

No one else finds this meeting useful. Let me repeat that again. No one else finds this meeting useful. We’re either going to do the work or we aren’t going to do the work, and in either case, I am going to pile-drive you from the top rope if you keep scheduling these.

[…]

If your backlog is getting bigger, then work is going into it faster than it is going out. Why is that happening? Fuck if I know, but it is probably totally unrelated to not doing Agile well enough.

[…]

High Output Management was the most highly-recommended management book I could find that wasn’t an outright textbook. Do you know what it says at the beginning? Probably not, because the kind of person that I am forced to choke out over their love of Agile typically can’t read anything that isn’t on LinkedIn. It says work must go out faster than it goes in, and all of these meetings obviously don’t do either of those things.

[…]

The three best managers I’ve ever worked for, with the most productive teams (at large organizations, so don’t even start on the excuses about scale) just let the team work and were there if I needed advice or a discussion, and they afforded me the quiet dignity of not hiring clowns to work alongside me.

Source: I Will Fucking Haymaker You If You Mention Agile Again | Ludicity

Image: Unsplash

People quit managers, not jobs

It turns out that the saying that “people quit managers, not jobs” is actually true. Research carried out by the Chartered Management Institute (CMI) shows that there’s “widespread concern” over the quality of managers. Indeed, 82% have become managers ‘accidentally’, without receiving any formal training.

I’ve had some terrible bosses. I don’t particularly want to focus on them, but rather take the opportunity to encourage those who line manage others to get some training around nonviolent communication. Also, let me just tell you that you don’t need a boss. You can entirely work in a decentralised, non-hierarchical way. I do so every day.

Almost one-third of UK workers say they’ve quit a job because of a negative workplace culture, according to a new survey that underlines the risks of managers failing to rein in toxic behaviour.

[…]

Other factors that the 2,018 workers questioned in the survey cited as reasons for leaving a job in the past included a negative relationship with a manager (28%) and discrimination or harassment (12%).

Among those workers who told researchers they had an ineffective manager, one-third said they were less motivated to do a good job – and as many as half were considering leaving in the next 12 months.

Source: Bad management has prompted one in three UK workers to quit, survey finds | The Guardian

Image: Unsplash

People may let you down, but AI Tinder won't

I was quite surprised to learn that the person who attempted to kill the Queen with a crossbow a couple of years ago was encouraged to do so by an AI chatbot he considered to be his ‘girlfriend’.

There are a lot of lonely people in the world. And a lot of lonely, sexually frustrated men. Which is why films like Her (2013) are so prescient. Given the technology already available, I can imagine a world where people create an idealised partner with whom they live a fantasy life.

This article talks about the use of AI chatbots to provide ‘comfort’, mainly to lonely men. I’m honestly not sure what to make of the whole thing. I’m tempted to say, “if it’s not hurting anyone, who cares?” but I’m not sure I really think that.

A 23-year-old American influencer, Caryn Marjorie, was frustrated by her inability to interact personally with her two million Snapchat followers. Enter Forever Voices AI, a startup that offered to create an AI version of Caryn so she could better serve her overwhelmingly male fan base. For just one dollar, Caryn’s admirers could have a 60-second conversation with her virtual clone.

During the first week, Caryn earned $72,000. As expected, most of the fans asked sexual questions, and fake Caryn’s replies were equally explicit. “The AI was not programmed to do this and has seemed to go rogue,” she told Insider. Her fans knew that the AI wasn’t really Caryn, but it spoke exactly like her. So who cares?

[…]

Replika seems to have had a positive impact on many individuals experiencing loneliness. According to the Vivofácil Foundation’s report on unwanted loneliness, 60% of people admit to feeling lonely at times, with 25% noting feelings of loneliness even when in the company of others. Recognizing this need, the creators of Replika developed a new app called Blush, often referred to as the “AI Tinder.” Blush’s slogan? “AI dating. Real feelings!” The app presents itself as an “AI-powered dating simulator that helps you learn and practice relationship skills in a safe and fun environment.” The Blush team collaborated with professional therapists and relationship experts to create a platform where users can read about and choose an AI-generated character they want to interact with.

[…]

Many Reddit posts argue that AI relationships are more satisfying than real-life ones — the virtual partners are always available and problem-free. “Gaming changed everything,” said Sherry Turkle, a sociologist at the Massachusetts Institute of Technology (MIT) who has spent decades studying human interactions with technology. In an interview with The Telegraph, Turkle said, “People may let you down, but here’s something that won’t. It’s a voice that always comforts and assures us that we’re being heard.”

Source: AI Tinder already exists: ‘Real people will disappoint you, but not them’ | EL PAÍS

A steampunk Byzantium with nukes

John Gray, philosopher and fellow son of the north-east of England, is probably best known for Straw Dogs: Thoughts on Humans and Other Animals. I confess to not yet having read it, despite (or perhaps because of) it being published in the same year I graduated from a degree in Philosophy 21 years ago.

This article by Nathan Gardels, editor-in-chief of Noema Magazine, is a review of Gray’s latest book, entitled The New Leviathans: Thoughts After Liberalism. Gray is a philosophical pessimist who argues against free markets and neoliberalism. In the book, which is another I’m yet to read, he argues for a return to pluralism, citing Thomas Hobbes' idea that there is no ultimate aim or highest good.

Rather than assuming one version of the good life, Gray suggests that liberalism must acknowledge that this is a contested notion. This has far-reaching implications, not least for current rhetoric around challenging the idea of universal human rights. I’ll have to get his book; it sounds like a challenging but important read.

The world Gray sees out there today is not a pretty one. He casts Russia as morphing into “a steampunk Byzantium with nukes.” Under Xi Jinping, China has become a “high-tech panopticon” that keeps the inmates under constant surveillance lest they fail to live up to the prescribed Confucian virtues of order and are tempted to step outside the “rule by law” imposed by the Communist Party.

Gray is especially withering in his critique of the sanctimonious posture of the U.S.-led West that still, to cite Reinhold Niebuhr, sees itself “as the tutor of mankind on its pilgrimage to perfection.” Indeed, the West these days seems to be turning Hobbes’ vision of a limited sovereign state necessary to protect the individual from the chaos and anarchy of nature on its head.

Paradoxically, Hobbes’ sovereign authority has transmuted, in America in particular, into an extreme regime of rights-based governance, which Gray calls “hyper-liberalism,” that has awakened the assaultive politics of identity. “The goal of hyper-liberalism,” writes Gray, “is to enable human beings to define their own identities. From one point of view this is the logical endpoint of individualism: each human being is sovereign in deciding who or what they want to be.” In short, a reversion toward the uncontained subjectivism of a de-socialized and unmediated state of nature that pits all against all.

Source: What Comes After Liberalism | NOEMA

NFTs as skeuomorphic baby-steps?

I came across this piece by Simon de la Rouviere via Jay Springett about how NFTs can’t die. Although I don’t have particularly strong opinions either way, I was quite drawn to Jay’s gloss that we’ll come to realise that “the ugly ape JPEGs were skeuomorphic baby-steps into this new era of immutable digital ledgers”.

On the one hand, knowing the provenance of things is useful. That’s what Vinay Gupta has been saying about Mattereum for years. On the other hand, the relentless focus of the web3 community on commerce is really off-putting.
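To make the “history, not snapshots” distinction concrete, here’s a minimal, hypothetical sketch of a hash-chained provenance log in Python. This is an illustration of the principle only, not how any real blockchain works; the class and names are invented:

```python
import hashlib
import json


def record_hash(record: dict) -> str:
    """Deterministic hash of a transfer record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


class ProvenanceLog:
    """Append-only log: each entry commits to the hash of the previous one,
    so the full transfer history of a token is tamper-evident."""

    def __init__(self, token_id: str):
        self.entries = [{"token": token_id, "owner": "minter", "prev": None}]

    def transfer(self, new_owner: str):
        # Each new entry carries the hash of the one before it.
        prev = record_hash(self.entries[-1])
        self.entries.append({"token": self.entries[0]["token"],
                             "owner": new_owner, "prev": prev})

    def verify(self) -> bool:
        """Recompute the chain; any edit to an earlier entry breaks it."""
        for i in range(1, len(self.entries)):
            if self.entries[i]["prev"] != record_hash(self.entries[i - 1]):
                return False
        return True
```

A snapshot database only stores the current `owner`; here, rewriting any earlier entry invalidates everything after it, which is roughly what “having history” buys you.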

Most databases are snapshots, but blockchains have history. When you see an NFT as having history associated with it, then you understand why a right-click-save only serves to add to its ongoing story. From the other lens, seeing an NFT as only a snapshot, you miss why much of this technology is important as a medium for information: not just in terms of art, collectibles, and new forms of finance.

This era will be marked as the first skeuomorphic era of the medium. What was made, was simulacra of the real world. Objects in the real world don’t bring their history along with them, so why would we think otherwise? For objects in the real world, their history is kept in stories that disappear as fast as the flicker of the flame it’s told over. If you are lucky, it would be captured in notes/documents/pictures/songs, and in the art world, perhaps a full paper archive.

And so, those who made this era of NFTs, built them with the implicit assumption that each one’s history was understood. If need be, you’d be willing to navigate the immutable ledger that gave it meaning by literally looking at esoteric cryptographic incantations. A blockchain explorer full of signatures, transactions, headers, nodes, wallets, acronyms, merkle trees, and virtual machines.

On top of this, most of the terminology today still points to seeing it all as a market and a speculative game. And so, I understand why the rest was missed. The primary gallery for most people, was a marketplace. A cryptographic key to write with is called a wallet. Gas paid is used as ink to inscribe. All expression with this shared ledger is one of the reduction of humanity to prices. It’s thus understandable and regrettable that the way this was shown, wasn’t to show its history, but to proclaim its financialness as its prime feature. The blockchain after all only exists because people are willing to spend resources to be more certain about the future. It is birthed in moneyness. Alongside those who saw a record-keeping machine, it would attract the worst kind of people, those whose only meaning comes from prices. For this story to keep being told, its narratives have to change.

Source: NFTs Can’t Die | Simon de la Rouviere

Where next for social media?

There’s nothing new about the idea of a splinternet or original about observing that people are retreating to dark forests of social media. I’m using this post about how social media is changing to also share a few links about Twitter (I’m not calling it “X”).

On Monday, my co-op will be running a proposal as to whether to deactivate our Twitter account. To my mind, we should have done it a long time ago. Engagement is non-existent, the whole thing is now a cesspool of misinformation, and even Bloomberg is publishing articles stating there is a moral case for no longer using it. The impact of us deactivating is likely to be negligible.

The trouble is that, although I don’t particularly want there to be another dominant, centralised platform, getting yourself noticed (and getting work) becomes increasingly difficult. I guess this is where the POSSE model comes in: Publish (on your) Own Site, Syndicate Elsewhere.

In a way, the pluriverse is already here. People can be active on half a dozen social-media apps, using each for a unique purpose and audience. On "public" platforms such as LinkedIn and X, formerly Twitter, I carefully curate my presence and use them exclusively as public-broadcasting tools for promotions and outreach. But for socializing, I retreat to various tight-knit, private groups such as iMessage threads and Instagram's Close Friends list, where I can be more spontaneous and personal in what I say. But while this setup is working OK for now, it's a patchwork solution.

[…]

But for all its flaws, I have depended on big platforms. My job as a freelance journalist hinges on a public audience and my ability to keep tabs on developing news. The fatigue I have felt is therefore partly fueled by another, more-pressing concern: Which social network should I bank on? It isn’t that I don’t want to post; I just don’t know where to do it anymore.

[…]

I’ve spent the past few months on Mastodon and Bluesky, a Jack Dorsey-backed decentralized social network, and have found them the best bets so far to replace Twitter. Their clutter-free platforms already match the quality of discourse that was on Twitter, albeit not at the same scale. And that’s the only problem with these platforms: They aren’t compatible with each other or big enough on their own to replace today’s giants. While there are efforts to bridge them and allow users to interact across the platforms, none have proved successful.

If these and other decentralized platforms find a way to merge into a larger ecosystem, they will force big platforms to change their tune in order to keep up. And hopefully, that future will yield a more balanced and regulated online lifestyle.

[…]

The other problem is that users have very little control over what they experience online. Studies have found that news overload from social media can cause stress, anxiety, fatigue, and lack of sleep. By democratizing social media, users can turn those negative health effects around by taking more control over who they’re associated with, what they look at in their feeds, and how algorithms are influencing their social experience. And by splintering our time across a variety of platforms — each with a different approach to content moderation — the online communication ecosystem ends up better reflecting the diversity of the people who use it. People who wish to keep their data to themselves can live inside tight-knit circles. Those who don’t want a round-the-clock avalanche of polarizing content can change what their feed shows them. Activists looking to spread a message can still reach millions. The list goes on.

Source: The Age of Social Media Is Changing and Entering a Less Toxic Era | Business Insider

Holographic depth of field

Well this is cool. Although there are limited ways of refocusing a shot after taking it, this new method allows that to be taken to the next level using existing technologies. It could be useful for everything from smartphones to telescopes.

Essentially, scientists have developed a new imaging technique that captures two images simultaneously, one with a low depth of field and another with a high depth of field. Algorithms then combine these images to create a hybrid picture with adjustable depth of field while maintaining sharpness.

Smartphones and movie cameras might one day do what regular cameras now cannot—change the sharpness of any given object once it has been captured, without sacrificing picture quality. Scientists developed the trick from an exotic form of holography and from techniques developed for X-ray cameras used in outer space.

[…]

A critical aspect of any camera is its depth of field, the distance over which it can produce sharp images. Although modern cameras can adjust their depth of field before capturing a photo, they cannot tune the depth of field afterwards.

True, there are computational methods that can, to some extent, refocus slightly blurred features digitally. But it comes at a cost: “Previously sharp features become blurred,” says study senior author Vijayakumar Anand, an optical engineer at the University of Tartu in Estonia.

The new method requires no newly developed hardware, only conventional optics, “and therefore can be easily implemented in existing imaging technologies,” Anand says.

[…]

The new study combines recent advances in incoherent holography with a lensless approach to photography known as coded-aperture imaging.

An aperture can function as a lens. Indeed, the first camera was essentially a lightproof box with a pinhole-size hole in one side. The size of the resulting image depends on the distance between the scene and the pinhole. Coded-aperture imaging replaces the single opening of the pinhole camera with many openings, which results in many overlapping images. A computer can process them all to reconstruct a picture of a scene.
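
The decoding step lends itself to a toy simulation. Here's a minimal NumPy sketch (my own illustration, not the study's actual method — all names and parameters are made up): the sensor image is modelled as the scene circularly convolved with a random open/closed mask, and correlating the recording with that same mask recovers the bright points of the scene.

```python
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(1)

# Toy scene: two point sources of different brightness
scene = np.zeros((32, 32))
scene[10, 12] = 1.0
scene[20, 5] = 0.5

# Coded aperture: a random pattern of open (1) and closed (0) holes
mask = (rng.random((32, 32)) < 0.5).astype(float)

def circular_convolve(a, b):
    """Circular 2-D convolution via the FFT."""
    return np.real(ifft2(fft2(a) * fft2(b)))

# Each open hole projects its own copy of the scene, so the sensor
# records many overlapping images: the scene convolved with the mask
recorded = circular_convolve(scene, mask)

# Decode by correlating with the mask; a random mask's autocorrelation
# is sharply peaked, so the scene's bright points re-emerge
decoded = np.real(ifft2(fft2(recorded) * np.conj(fft2(mask))))
```

Real coded-aperture systems use carefully designed masks (such as uniformly redundant arrays) rather than random ones, precisely so that the decoding step produces fewer artefacts than this naive version.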

[…]

The new technique records two images simultaneously, one with a refractive lens, the other with a conical prism known as a refractive axicon. The lens has a low depth of field, whereas the axicon has a high depth of field.

Algorithms combine the images to create a hybrid picture for which the depth of field can be adjusted between that of the lens and that of the axicon. The algorithms preserve the highest image sharpness during such tuning.
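
As a rough illustration of the tuning step (again my own sketch, not the authors' algorithm): once you have a low depth-of-field capture and a high depth-of-field capture of the same scene, a single parameter can slide the result between the two.

```python
import numpy as np

def hybrid_dof(lens_img, axicon_img, alpha):
    """Blend a low depth-of-field image (refractive lens) with a high
    depth-of-field image (axicon). alpha=0 returns the lens image,
    alpha=1 the axicon image. A crude stand-in for the paper's
    reconstruction algorithms."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return (1.0 - alpha) * lens_img + alpha * axicon_img

# Stand-in data; real inputs would be the two simultaneous captures
rng = np.random.default_rng(0)
lens_img = rng.random((64, 64))    # low DOF: background out of focus
axicon_img = rng.random((64, 64))  # high DOF: everything in focus

shallow = hybrid_dof(lens_img, axicon_img, 0.0)  # pure lens image
deep = hybrid_dof(lens_img, axicon_img, 1.0)     # pure axicon image
```

Note that a naive pixel blend like this would soften detail at intermediate settings; preserving sharpness while tuning is exactly the problem the study's algorithms solve.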

Source: Impossible Photo Feat Now Possible Via Holography | IEEE Spectrum

Pre-committed defaults

Uri from Atoms vs Bits identifies a useful trick to quell indecisiveness. He calls it a ‘release valve principle’ but I prefer the term he uses in the body text: a pre-committed default.

Basically, it’s knowing what you’re going to do if you can’t decide on something. This can be particularly useful if you’re conflicted between short-term pain for long-term gain.

One thing that is far too easy to do is get into mental loops of indecision, where you're weighing up options against options, never quite knowing what to do, but also not knowing how to get out of the loop.

[…]

There’s a partial solution to this which I call “release valve principles”: basically, a pre-committed default decision rule you’ll use if you haven’t decided something within a given time frame.

I watched a friend do this when we were hanging out in a big city, vaguely looking for a bookshop we could visit but helplessly scrolling through googlemaps to try to find a good one; after five minutes he said “right” and just started walking in a random direction. He said he has a principle where if he hasn’t decided after five minutes where to go he just goes somewhere, instead of spending more time deliberating.

[…]

The release valve principle is an attempt to prod yourself into doing what your long-term self prefers, without forcing you into always doing X or never doing Y – it just kicks in when you’re on the fence.

Source: Release Valve Principles | Atoms vs Bits

Image: Unsplash

3 bits of marriage advice

I’m not sure about likening marriage to a business relationship, but after being with my wife for more than half my life, and married for 20 years of it, I know that this article contains solid advice.

Someone once told me when I was teaching that equity is not equality. That’s something to bear in mind with many different kinds of relationships. There will be times where you have to shoulder a huge burden to keep things going; likewise there will be times when others have to shoulder one for you.

1. Bank on the partnership. In a corporate merger, there must be financial integration. The same goes for a marriage: Maintaining separate finances lowers the chances of success. Keeping money apart might seem sensible in order to avoid unnecessary disagreements, especially when both partners are established earners. But research shows that when couples pool their funds and learn to work together on saving and spending, they have higher relationship satisfaction and are less likely to split up. Even if you don’t start out this way and have to move gradually, financial integration should be your objective.

2. Forget 50–50. A merger—as opposed to a takeover—suggests a “50–50” relationship between the companies. But this is rarely the case, because the partner firms have different strengths and weaknesses. The same is true for relationship partners. I have heard older couples say that they plan to split responsibilities and financial obligations equally; this might sound good in theory, but it’s not a realistic aspiration. Worse, splitting things equally militates against one of the most important elements of love: generosity—a willingness to give more than your share in a spirit of abundance, because giving to someone you care for is pleasurable in itself. Researchers have found that men and women who show the highest generosity toward their partner are most likely to say that they’re “very happy” in their marriage.

Of course, generosity can’t be a one-way street. Even the most bountiful, free-giving spouse will come to resent someone who is a taker; a “100–0” marriage is surely even worse than the “50–50” one. The solution is to defy math: Make it 100–100.

3. Take a risk. A common insurance policy in merger marriages is the prenuptial agreement—a contract to protect one or both parties’ assets in the case of divorce. It’s a popular measure: The percentage of couples with a “prenup” has increased fivefold since 2010.

A prenup might sound like simple prudence, but it is worth considering the asymmetric economic power dynamic that it can wire into the marriage. As one divorce attorney noted in a 2012 interview, “a prenup is an important thing for the ‘monied’ future spouse if a marriage dissolves.” Some scholars have argued that this bodes ill for the partnership’s success, much as asymmetric economic power between two companies makes a merger difficult.

Source: Why the Most Successful Marriages Are Start-Ups, Not Mergers | The Atlantic

Microcast #101 — Self-esteem, pies, and moving house


More solo waffle about various things. I could pretend there's a consistent thread, but then I'd be lying.

Show notes

A reward is not 'more email'

I’ve just signed up to support Jay Springett’s work and am looking forward to receiving his zine.

As he points out, it’s a bit odd that getting more email is the core benefit of most subscription platforms. I shall be pondering that.

I say this every time I put a zine out, but I think that this is the way to go – at least for me. I just don’t understand Patreon and Substack rewards being ‘more email’. It’s baffling.

Social media is collapsing and, as I wrote in the first paper edition of the zine, we are returning to the real. A physical newsletter/zine doesn’t get any realer than that.

Source: Start Select Reset Zine – Quiet Quests - thejaymo

Curiosity and infinite detail

This is a wonderful reminder by David Cain that there’s value in retraining our childlike ability to zoom in on the myriad details in life. Not in terms of leaves and details in the physical world around us, but in terms of ideas, too.

Zooming in and out is, I guess, the essence of curiosity. As an adult, with a million things to get done, it’s easy to stay zoomed-out so that we have the bigger picture. But it ends up being a shallow life, and one susceptible to further flattening via the social media outrage machine.

If you were instructed to draw a leaf, you might draw a green, vaguely eye-shaped thing with a stem. But when you study a real leaf, say an elm leaf, it’s got much more going on than that drawing. It has rounded serrations along its edges, and the tip of each serration is the end of a raised vein, which runs from the stem in the middle. Tiny ripples span the channels between the veins, and small capillaries divide each segment into little “counties” with irregular borders. I could go on for pages.

[…]

Kids spend a lot of their time zooming their attention in like that, hunting for new details. Adults tend to stay fairly zoomed out, habitually attuned to wider patterns so they can get stuff done. The endless detail contained within the common elm leaf isn’t particularly important when you’re raking thousands of them into a bag and you still have to mow the lawn after.

[…]

Playing with resolution applies to ideas too. The higher the resolution at which you explore a topic, the more surprising and idiosyncratic it becomes. If you’ve ever made a good-faith effort to “get to the bottom” of a contentious question — Is drug prohibition justifiable? Was Napoleon an admirable figure? — you probably discovered that it’s endlessly complicated. Your original question keeps splitting into more questions. Things can be learned, and you can summarize your findings at any point, but there is no bottom.

The Information Age is clearly pushing us towards low-res conclusions on questions that warrant deep, long, high-res consideration. Consider our poor hominid brains, trying to form a coherent worldview out of monetized feeds made of low-resolution takes on the most complex topics imaginable — economic systems, climate, disease, race, sex and gender. Unsurprisingly, amidst the incredible volume of information coming at us, there’s been a surge in low-res, ideologically-driven views: the world is like this, those people are like that, X is good, Y is bad, A causes B. Not complicated, bro.

For better or worse, everything is infinitely complicated, especially those things. The conclusion-resistant nature of reality is annoying to a certain part of the adult human brain, the part that craves quick and expedient summaries. (Social media seems designed to feed, and feed on, this part.)

Source: The Truth is Always Made of Details | Raptitude

Well, when you put it like that...

This came across my timeline earlier this week and it’s a pretty stark reminder / wake-up call. For ‘Mastodon’, of course, read ‘The Fediverse’.

You could add LinkedIn to this list, but then that’s owned by Microsoft, a company that I have detested for fully 25 years.

To recap your options in this crowded social media landscape:
  • Twitter: owned by Musk, a fascist
  • Blue Sky: funded by Dorsey, a fascist
  • Facebook: owned by Zuckerberg, a fascist
  • Instagram: owned by Zuckerberg, a fascist
  • Threads: owned by Zuckerberg, a fascist
  • Post News: funded by Andreessen, a fascist
  • TikTok: owned by the Chinese Government I guess?
  • Mastodon: owned by nobody and/or everybody! Seize the memes of production!

If you are worried about picking the "right" Mastodon instance, don't. Just spin the wheel. How about sfba.social or mastodon.social, those are both fine choices.

Source: 10-Oct-2023 (Tue): Wherein Twitter delenda est | DNA Lounge

A lonely and surveilled landscape

Kyle Chayka, writing in The New Yorker, points to what many of us have felt over the past decade or so: the internet just isn’t fun any more. This makes me sad, as my kids will never experience what it was like.

Instead of discovery and peer-to-peer relationships, we’ve got algorithms and influencer broadcasts. It’s an increasingly lonely and surveilled landscape. Thankfully, places of joy still exist, but they feel like pockets of resistance rather than mainstream hangouts.

The social-media Web as we knew it, a place where we consumed the posts of our fellow-humans and posted in return, appears to be over. The precipitous decline of X is the bellwether for a new era of the Internet that simply feels less fun than it used to be. Remember having fun online? It meant stumbling onto a Web site you’d never imagined existed, receiving a meme you hadn’t already seen regurgitated a dozen times, and maybe even playing a little video game in your browser. These experiences don’t seem as readily available now as they were a decade ago. In large part, this is because a handful of giant social networks have taken over the open space of the Internet, centralizing and homogenizing our experiences through their own opaque and shifting content-sorting systems. When those platforms decay, as Twitter has under Elon Musk, there is no other comparable platform in the ecosystem to replace them. A few alternative sites, including Bluesky and Discord, have sought to absorb disaffected Twitter users. But like sproutlings on the rain-forest floor, blocked by the canopy, online spaces that offer fresh experiences lack much room to grow.

[…]

The Internet today feels emptier, like an echoing hallway, even as it is filled with more content than ever. It also feels less casually informative. Twitter in its heyday was a source of real-time information, the first place to catch wind of developments that only later were reported in the press. Blog posts and TV news channels aggregated tweets to demonstrate prevailing cultural trends or debates. Today, they do the same with TikTok posts—see the many local-news reports of dangerous and possibly fake “TikTok trends”—but the TikTok feed actively dampens news and political content, in part because its parent company is beholden to the Chinese government’s censorship policies. Instead, the app pushes us to scroll through another dozen videos of cooking demonstrations or funny animals. In the guise of fostering social community and user-generated creativity, it impedes direct interaction and discovery.

According to Eleanor Stern, a TikTok video essayist with nearly a hundred thousand followers, part of the problem is that social media is more hierarchical than it used to be. “There’s this divide that wasn’t there before, between audiences and creators,” Stern said. The platforms that have the most traction with young users today—YouTube, TikTok, and Twitch—function like broadcast stations, with one creator posting a video for her millions of followers; what the followers have to say to one another doesn’t matter the way it did on the old Facebook or Twitter. Social media “used to be more of a place for conversation and reciprocity,” Stern said. Now conversation isn’t strictly necessary, only watching and listening.

Source: Why the Internet Isn’t Fun Anymore | The New Yorker

And so it continues...

As we start the run-up to a General Election in the UK (date still to be announced) the deepfakes will ramp up in intensity. This one is a purported audio clip, but I should imagine in six months' time there will be video clips that fool lots of people.

What with X divesting itself of seemingly all safeguards, there are going to be a lot of people who are fooled, especially those with poor information literacy skills and a vested interest in believing lies which fit their worldview.

An audio clip posted to social media on Sunday, purporting to show Britain’s opposition leader Keir Starmer verbally abusing his staff, has been debunked as being AI-generated by private-sector and British government analysis.

The audio of Keir Starmer was posted on X (formerly Twitter) by a pseudonymous account on Sunday morning, the opening day of the Labour Party conference in Liverpool. The account asserted that the clip, which has now been viewed more than 1.4 million times, was genuine, and that its authenticity had been corroborated by a sound engineer.

Ben Colman, the co-founder and CEO of Reality Defender — a deepfake detection business — disputed this assessment when contacted by Recorded Future News: “We found the audio to be 75% likely manipulated based on a copy of a copy that’s been going around (a transcoding).”

[…]

Simon Clarke, a Conservative Party MP, warned on social media: “There is a deep fake audio circulating this morning of Keir Starmer - ignore it.” The security minister Tom Tugendhat, also a Conservative MP, also warned of the “fake audio recording” and implored Twitter users not to “forward to amplify it.”

Source: UK opposition leader targeted by AI-generated fake audio smear | The Record

Billionaires shouldn't exist, even if they're philanthropists

I’m sure Charles Feeney was a great guy, and it certainly sounds like he gave the money he amassed to very good causes (and anonymously too!)

The thing to remember when reading these stories, though, is that billionaires shouldn’t exist. They make their money off the back of workers and tax loopholes. I’d challenge anyone who says otherwise to send proof.

As I’ve said many times before, if a regular person wakes up with what they think is a ‘good idea’ but is actually misguided and dangerous, then nothing much is likely to come of it. But a billionaire, by dint of their huge unearned wealth, can make it happen. And recently, we’ve had an object lesson in how that can go wrong… (cough Musk cough)

Feeney was a proponent of “Giving While Living,” believing he could make more of a difference in causes he cared about while he was alive, rather than setting up a foundation after he died, according to the Atlantic Philanthropies.

“It’s much more fun to give while you are alive than to give when you are dead,” Feeney said in a biography about him, “The Billionaire Who Wasn’t.”

Feeney set up the Atlantic Philanthropies in 1982, transferring all of his business assets to it two years later, according to the foundation. In 2020, the foundation closed its doors after it said it had successfully given away all of its funds.

In total, the Atlantic Philanthropies made grants totaling $8 billion across five continents — much of it anonymously, the foundation said. Donations supported education, health care, human rights and more. Feeney’s foundation donated to infrastructure in Vietnam, universities in Ireland and medical centers devoted to finding cures for cancer and cardiovascular disease, according to the foundation’s website.

Feeney chose to live the last three decades of his life frugally, his foundation said: He did not own a car or home, preferring to live in a rented apartment in San Francisco, according to the foundation.

Source: Charles Feeney, retail entrepreneur who gave $8 billion to charity, dies at 92 | CNN Business

Nuance and depth through long(er)form reading

Tantek Çelik reflects on a post by Ben Werdmuller, who wrote a script to be able to quickly follow the blogs of people he follows on Mastodon. As Ben notes in his post, there’s a lot more nuance and depth to be had in reading people’s longer-form thoughts.

One of the reasons that I write here about other people’s work on a daily basis is that it forces me to read and engage with what other people think and believe. That’s helpful in getting me out of my own head, and (probably) makes me less argumentative.

Snail shells

The combination of taking more time (as longer form writing encourages) and publishing on a domain associated with your name, your identity, enables & incentivizes more thoughtful writing. More thoughtful writing elevates the reader to a more thoughtful state of mind.

There is also a self-care aspect to this kind of deliberate shift. Ben wrote that he found himself “craving more nuance and depth” among “quick, in-the-now status updates”. I believe this points to a scarcity of thoughtfulness in such short form writings. Spending more time reading thoughtful posts not only alleviates such scarcity, it can also displace the artificial sense of urgency to respond when scrolling through soundbyte status updates.

[…]

There’s a larger connection here between thoughtful reading, and finding, restoring, and rebuilding the ability to focus, a key to thoughtful writing. It requires not only reducing time spent on short form reading (and writing), but also reducing notifications, especially push notifications. That insight led me to wade into and garden the respective IndieWeb wiki pages for notifications, push notifications, and document a new page for notification fatigue. That broader topic of what to do about notifications is worth its own blog post (or a few), and a good place to end this post.

Source: More Thoughtful Reading & Writing on the Web | Tantek

Image: Pixabay

AIs and alignment with human values

This is a fantastic article by Jessica Dai, cofounder of Reboot. What I particularly appreciate is the way that she reframes the fear about Artificial General Intelligence (AGI) as being predicated upon a world in which we choose to outsource human decision-making and give AIs direct access to things such as the power grid.

In many ways, Dai is arguing that, just as the crypto-bros tried to imagine a world where everything is on the blockchain, so those fearful about AIs are actually advocating a world where we abdicate everything to algorithms.

In a recent NYT interview, Nick Bostrom — author of Superintelligence and core intellectual architect of effective altruism — defines “alignment” as “ensur[ing] that these increasingly capable A.I. systems we build are aligned with what the people building them are seeking to achieve.”

Who is “we”, and what are “we” seeking to achieve? As of now, “we” is private companies, most notably OpenAI, one of the first-movers in the AGI space, and Anthropic, which was founded by a cluster of OpenAI alumni.

[…]

To be fair, Anthropic has released Claude’s principles to the public, and OpenAI seems to be seeking ways to involve the public in governance decisions. But as it turns out, OpenAI was lobbying for reduced regulation even as they publicly “advocated” for additional governmental involvement; on the other hand, extensive incumbent involvement in designing legislation is a clear path towards regulatory capture. Almost tautologically, OpenAI, Anthropic, and similar startups exist in order to dominate the marketplace of extremely powerful models in the future.

[…]

The punchline is this: the pathways to AI x-risk ultimately require a society where relying on — and trusting — algorithms for making consequential decisions is not only commonplace, but encouraged and incentivized. It is precisely this world that the breathless speculation about AI capabilities makes real.

[…]

The emphasis on AI capabilities — the claim that “AI might kill us all if it becomes too powerful” — is a rhetorical sleight-of-hand that ignores all of the other if conditions embedded in that sentence: if we decide to outsource reasoning about consequential decisions — about policy, business strategy, or individual lives — to algorithms. If we decide to give AI systems direct access to resources, and the power and agency to affect the allocation of those resources — the power grid, utilities, computation. All of the AI x-risk scenarios involve a world where we have decided to abdicate responsibility to an algorithm.

Source: The Artificiality of Alignment | Reboot

Microplastics, tyres, and EVs

When I took delivery of my electric vehicle (EV) earlier this month, I already knew that it would actually have been better for the environment for me to keep hold of our 10-year-old Volvo. Embodied emissions, which are the emissions created through the car’s manufacture, are huge.

So it fills me with dismay to find out that tyre dust causes a huge problem in terms of microplastics — and the weight of EVs, and subsequent tyre wear, just makes that worse.

Infographic showing impact on microplastics

Scientists have a good understanding of engine emissions, which typically consist of unburnt fuel, oxides of carbon and nitrogen, and particulate matter related to combustion. However, new research shared by Yale Environment 360 indicates that there may be a whole host of toxic chemicals being shed from tires and brakes that have been largely ignored until now. Even worse, these emissions may be so significant that they actually exceed those from a typical car's exhaust output.

New research efforts are only just beginning to reveal the impact of near-invisible tire and brake dust. A report from the Pew Charitable Trust found that 78 percent of ocean microplastics are from synthetic tire rubber. These toxic particles often end up ingested by marine animals, where they can cause neurological effects, behavioral changes, and abnormal growth.

Meanwhile, British firm Emissions Analytics spent three years studying tires. The group found that a single car’s four tires collectively release 1 trillion “ultrafine” particles for every single kilometer (0.6 miles) driven. These particles, under 100 nanometers in size, are so tiny that they can pass directly through the lungs and into the blood. They can even cross the body’s blood-brain barrier. The Imperial College London has also studied the issue, noting that “There is emerging evidence that tire wear particles and other particulate matter may contribute to a range of negative health impacts including heart, lung, developmental, reproductive, and cancer outcomes.”

Source: Tire Dust Makes Up the Majority of Ocean Microplastics: Study | The Drive

Social media platforms have been reading the airlines' enshittification handbook

This year, Cory Doctorow has been making waves with his, as usual, spot-on analysis of what’s going on in the world. What he calls ‘enshittification’ happens like this:

Here is how platforms die: First, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

This article talks about how platforms such as Twitter/X, TikTok, and Instagram are either already charging, or planning to charge, users of their platforms. As the author, Thomas Germain, points out, this means that you’re not just the product any more; you’re also the customer.

Interestingly, Germain likens what social networks are doing to what airlines have done: deliberately make things worse and then providing a paid upgrade to relieve your pain.

On Tuesday, the Wall Street Journal reported that Meta plans to charge European users $17 a month for an ad-free version of Instagram and Facebook. It solidifies a trend that would have seemed absurd just a few years ago: every major social media platform now either has a premium tier or is experimenting with rolling one out. It’s the dawning of a new era, where the tech industry suggests people should pay to look at memes and tweets, and somehow, vast numbers of people break out their credit cards and do it.

[…]

This is a radical departure from the business model that ran social media for the past few decades, where you offer your eyeballs to the advertising gods in exchange for free connections to friends and content creators. The old cliche goes that if you’re not the customer, you’re the product. Now, it seems, you’re both.

[…]

It’s a system that creates perverse incentives for companies. Social media isn’t the first industry to charge customers for a more comfortable experience. Airlines, for example, offer the tech business a troubling, anti-consumer model. You’ve probably noticed air travel has gotten a lot more unpleasant. That’s by design. Over the last twenty years, airlines have found ways to charge customers for options that used to be free, including checked bags, seat selection, and priority boarding. Legroom, too, is now a way to squeeze travelers for more cash. By 2014, Consumer Reports found that on average, the roomiest seats in coach were several inches tighter than the smallest seats that airlines dared to offer passengers in the 1990s. Airlines have such a stranglehold on our economy that they can make their customers suffer, on purpose, to encourage you to pay for a little relief.

You can probably expect the same on social media. It’s already happening to a certain extent. On YouTube, the serfs who want free videos are now sometimes treated to two or even three unskippable ads, and incessant popups that promise a better life is just a few dollars away.

Source: Welcome to the Age of Paid Social Media | Gizmodo

On the importance of fluency in other people's love languages

I was talking to someone yesterday about ‘love languages’ which they hadn’t come across before. It’s easy to dismiss these kinds of things, but I’ve found this approach quite insightful when it comes to identifying people’s needs in relationships.

I’m not going to talk about other people’s love languages, but in my experience most people appreciate expressions of love (whether romantic or platonic) in two out of the five ways. For example, I’m all about words of affirmation (#1) and gifts (#3). That’s what I give out by default because that’s what I like to take in.

The reason the love languages approach is helpful is to realise that others might need something different from what you offer them by default. This particular article on the TED website is interesting because it was written during pandemic lockdowns and so gets creative with ways in which they can be expressed at a distance.

What I find so helpful about love languages is that they express a basic truth. Implicit to the concept is a common-sense idea: We don’t feel or experience love in the same way. Some of us will only be content when we hear the words “I love you,” some prize quality time together, while some will feel most cared for when our partner scrubs the toilet.

In this way, love is a bit like a country’s currency: One coin or bill has great value in a particular country, less value in the countries that border it, and zero value in many other countries. In relationships, it’s essential to learn the emotional currency of the humans we hold dear and identifying their love language is part of it.

Love language #1: Words of affirmation

Those of us whose love language is words of affirmation prize verbal connection. They want to hear you say precisely what you appreciate or admire about them. For example: “I really loved it when you made dinner last night”; “Wow, it was so nice of you to organize that neighborhood bonfire”; or just “I love you.”

[…]

Love language #2: Acts of service

Some of us feel most loved when others lend a helping hand or do something kind for us. A friend of mine is currently going through chemotherapy and radiation, putting her at high risk for COVID-19 and other infections. Knowing that her love language is acts of service, a group of neighbor friends snuck over under the cover of darkness in December and filled her flower pots in front of her house with holiday flowers and sprigs. Others have committed to shoveling her driveway all winter. (It’s Minnesota, so that’s big love.)

[…]

Love language #3: Gifts

Those of us whose love language is gifts aren’t necessarily materialistic. Instead, their tanks are filled when someone presents them with a specific thing, tangible or intangible, that helps them feel special. Yes, truly, it’s the thought that counts.

[…]

Love language #4: Quality time

Having another person’s undivided, dedicated attention is precious currency for the people whose love language is quality time. In a time of COVID-19 and quarantining, spending quality time together can seem challenging. But thanks to technology, it’s actually one of the easiest to engage in.

[…]

Love language #5: Physical touch

Expressing the language of physical touch can be as platonic as giving a friend an enthusiastic fist-bump when she tells you about landing an interview for a dream job or as intimate as a kiss with your partner to mark the end of the workday.

[…]

Love languages are a worthwhile concept to become fluent in during this pandemic time — and at this time in the world. Long before COVID arrived on the scene, we were already living through an epidemic of loneliness. Loneliness is not just about being alone; it’s about experiencing a lack of satisfying emotional connections. By taking the time to learn each other’s love languages and then using them, we can strengthen our relationships and our bonds to others.

Source: Do you know the 5 love languages? Here’s what they are — and how to use them | TED

Aristotle diagnoses our current political problems

New Philosopher magazine cover (issue #41: Conflict)

The latest issue of New Philosopher magazine is about conflict. As usual, they quote a philosopher on the subject, in this case Aristotle in his Politics.

I studied Philosophy as an undergraduate and therefore read a lot of Aristotle. But it's been a couple of decades and I haven't gone back to him much in between. I tend to prefer the pre-Socratics.

Last week, I posted about Yuval Noah Harari talking about the post-truth revolutionary right. The quotation below from Aristotle is probably best read in that light: our current political situation in the west seems to spring from a combination of gaslighting and victim-blaming.

Now, in oligarchies the masses make revolution under the idea that they are unjustly treated, because, as I said before, they are equals, and have not an equal share, and in democracies the notables revolt, because they are not equals, and yet have only an equal share.

Source: New Philosopher #41: Conflict

The rolling drama of the climate crisis just got a whole lot worse

It’s massively concerning that, although scientists seem to understand why the earth has been warming due to climate change over the last few decades, they don’t seem to know why there’s all of a sudden been a huge spike.

I just hope it’s not something like methane being released from permafrost, because then we are all completely shafted.

Chart showing huge spike in temperature
Global temperatures soared to a new record in September by a huge margin, stunning scientists and leading one to describe it as “absolutely gobsmackingly bananas”.

The hottest September on record follows the hottest August and hottest July, with the latter being the hottest month ever recorded. The high temperatures have driven heatwaves and wildfires across the world.

September 2023 beat the previous record for that month by 0.5C, the largest jump in temperature ever seen. September was about 1.8C warmer than pre-industrial levels. Datasets from European and Japanese scientists confirm the leap.

The heat is the result of the continuing high levels of carbon dioxide emissions combined with a rapid flip of the planet’s biggest natural climate phenomenon, El Niño. The previous three years saw La Niña conditions in the Pacific Ocean, which lowers global temperature by a few tenths of a degree as more heat is stored in the ocean.

[…]

The scientists said that the exceptional events of 2023 could be a normal year in just a decade, unless there is a dramatic increase in climate action. The researchers overwhelmingly pointed to one action as critical: slashing the burning of fossil fuels down to zero.

Source: ‘Gobsmackingly bananas’: scientists stunned by planet’s record September heat | The Guardian

Five kinds of friends

Anyone who’s read Montaigne’s Essays will probably be slightly jealous of his friendship with Étienne de La Boétie. The latter tragically passed away at the age of 32, something that Montaigne, it seemed, never fully got over. I’ve never had a friend like that. I doubt many men have.

This article from sociologist Randall Collins talks about five different types of friendship. I’ve got plenty of ‘allies’, some ‘backstage intimates’, and ‘mutual-interests friends’. I definitely lack, mainly out of choice, ‘fun friends’ and ‘sociable acquaintances’.

It would be interesting to learn more about the history and sociology of friendship. This article goes a little bit into the realm of social media friends, but I’m not sure you can learn much by just studying the medium. That reminds me of a Douglas Adams quote I can’t quite find, which goes something along the lines of: people always talk about terrorists planning things “over the internet”, but would never talk about them planning things “over a cup of tea”.

Hands
Allies: talking about money; asking for loans; asking for letters of reference, endorsements, asking to contact further network friends for jobs or investments. In specialized fields like scientific research, talking about what journals or editors to approach, what topics are hot, giving helpful advice on drafts. In art and music: gossiping about who’s doing what, contacts with agents, galleries, venues.

Backstage intimates: Speaking in privacy; taking care not to be overheard. Don’t tell anybody about this.

Fun friends: Shared laughter, especially spontaneous and contagious. Facial and body indicators of genuine amusement, not forced smiles or saying “that’s funny” instead of laughing. Very strong body alignment, such as fans closely watching the same event and exploding in synch into cheers or curses.

Mutual-interests friends: talking at great length about a single topic. Being unable to tear oneself away from an activity, or from conversations about it.

Sociable acquaintances: General lack of all of the above, in situations where people expect to talk with each other about something besides practical matters (excuse me, can I get by?) Banal commonplace topics, the small change of social currency: the weather; where are you from; what do you do; foreign travels; do you know so-and-so? Answers to “how are you doing?” which avoid giving away information about one’s problems or matters of serious concern. Talking about politics can be conversational filler (when everyone assumes they’re in the same political faction), as often happens at the end of dinner parties when all other topics have been exhausted.

Source: FIVE KINDS OF FRIENDS | The Sociological Eye

Image: Pixabay

Anxiety, deadness, and aggression

I can’t quite remember where I came across this article, but I’ve subscribed to the online magazine that it’s from, as it seems interesting.

The article itself explores, in quite a dense way, the psychological and societal aspects of labour, particularly within a capitalist framework. The author, Timofei Gerber, who is co-founder and co-editor of Epoché Magazine, argues that workers are alienated from the productive part of their labour. This leads to a cycle of dissatisfaction and unfulfilled potential.

Workers' alienation, he argues, is rooted in societal structures that prohibit the free flow of libidinal, or life-affirming, energy. Society therefore perpetuates a cycle of anxiety, deadness, and aggression, which further disconnects individuals from their creative and productive selves.

Well, I mean, it’s a theory. Reading this article felt a lot like reading Hegel’s Phenomenology of Spirit, to be honest. A slog. 🥱

 

The desire of individuals to be productive, to be free and to be responsible for their lives, rejects all models of control, all hierarchy, all suppression. The individual that experiences pleasure, that is productive, is productive in all aspects of its life, it takes responsibility for its actions, and is therefore a very insubordinate subject. We have seen how our concept of labour is built on the model of hunger, and what consequences that has. The prohibition of pleasure has therefore but one function: to produce obedient subjects, which do not question the current order, and which do not desire to change the world. As the model of sexuality is rejected, the only accepted way towards satisfaction is based on the model of hunger: the constant need to fill the emptiness inside by succumbing to consumer society. It is for this reason that for Reich, the liberation of sexuality was of primary importance. It is true that people hunger and are suffering materially; but the reason for this does not originate from the sphere of hunger, it is not a physical necessity. Scarcity itself is artificially produced, an artificial hunger and emptiness that results from the blockage of the inherent productivity of life. And we accept this state of things because of our pleasure anxiety, because we are afraid of our own responsibility and freedom.
Source: Wilhelm Reich on Pleasure and the Genesis of Anxiety | Epoché Magazine

Microcast #100 — Awkward Conversations


Instead of avoiding difficult conversations, aim to make them less awkward. Here's one way.

Show notes


Image: Unsplash

Different levels of reading (technologies)

This post by author Nick Harkaway was shared by Warren Ellis in his most recent newsletter. It’s something that my wife and I have talked about recently, as she tends to print everything out to read.

I do occasionally, but only for things I want to read really closely. In fact, I’ve got three levels: deep (paper), medium (e-ink), and shallow (screen). Most of the work that I do doesn’t require super-close reading of the text but rather the general gist of what’s going on. I’ve got an A4-sized ereader so it’s easy to put stuff on there.

Previously, I have printed out things. For example, I printed out my doctoral thesis and put it on the windows of the Jisc offices to make tiny corrections when I was almost ready for submission. I think this is entirely OK and normal.

What I really want is a laptop screen where I can switch between a regular screen and e-ink. Something like this.

There’s a sense of reality in printing (and reading on paper) a finished novel. In theory, you can go through an entire creative effort without ever producing paper on your desktop, but for me there’s a separate space of “tangible book” which has a particular moment and a set of uses. This morning I printed the first two chapters to look at, and aside from the sense of pleasure in seeing a physical manifestation of work done (in this instance a sort of echo, because I held the whole book in A4 recycled a while ago) there’s a difference between words on screen and words on paper.

Holding paper, I notice different things. The work feels different - different tonal issues arise, new sections I need to rewrite. It’s akin to - but different again from - reading a book aloud and hearing the cadences, the unintentional repetitions and homonyms, the blunt force wrongness of an unmodified word. The text is not different, but the experience is, and of course it’s still the paper experience of my book that most people will have. (I think - a couple of my books were bigger sellers as ebooks than paper in some markets, but as far as I know, perhaps even moreso now than a few years ago, paper remains on the throne.)

There are actual science reasons why analogue reading is different - and as the writing process at this point is founded on reading and re-reading, those aspects must be interwoven with the creative edit, irrespective of whether the creative process of itself works differently in the brain depending on the medium in which it is iterated. Whether it’s an inherent quality in the combination of tactile experience and inert text, or whether it’s contingent on my knowledge that digital text is both infinitely editable and subject to sudden interruption when my desktop decides to notify me of something, I find there’s a placidity and a sense of authenticity in the work. I’m always wary of mystifying the tree’s presence in the printed book or the long inheritance of paper, but - be it a societal form or something more fundamental - paper feels more “in the world”.

Source: The Print | Fragmentary

Perhaps switch to another search engine?

I use a lot of Google products. I’m typing this on a laptop on which I’ve installed ChromeOS Flex, I use Google Workspace at work, I’ve got a Google Assistant device in every room of our house, and now even my car has an infotainment system with it built in.

But I do take some precautions. I don’t use Google Search. I turn off my web history, watching history on YouTube, opt out of personalisation, and encrypt my Chrome browser sync with a password.

This article doesn’t surprise me, because Google’s core business is advertising. It’s still creepy though.

There have long been suspicions that the search giant manipulates ad prices, and now it’s clear that Google treats consumers with the same disdain. The “10 blue links,” or organic results, which Google has always claimed to be sacrosanct, are just another vector for Google greediness, camouflaged in the company’s kindergarten colors.

Google likely alters queries billions of times a day in trillions of different variations. Here’s how it works. Say you search for “children’s clothing.” Google converts it, without your knowledge, to a search for “NIKOLAI-brand kidswear,” making a behind-the-scenes substitution of your actual query with a different query that just happens to generate more money for the company, and will generate results you weren’t searching for at all. It’s not possible for you to opt out of the substitution. If you don’t get the results you want, and you try to refine your query, you are wasting your time. This is a twisted shopping mall you can’t escape.

Why would Google want to do this? First, the generated results to the latter query are more likely to be shopping-oriented, triggering your subsequent behavior much like the candy display at a grocery store’s checkout. Second, that latter query will automatically generate the keyword ads placed on the search engine results page by stores like TJ Maxx, which pay Google every time you click on them. In short, it’s a guaranteed way to line Google’s pockets.

Source: How Google Alters Search Queries to Get at Your Wallet | WIRED
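The substitution mechanism the excerpt describes amounts to a server-side lookup that silently swaps the user's query for a more lucrative one. Here's a minimal conceptual sketch of that idea; to be clear, this is not Google's actual code, and the mapping table and function names are entirely hypothetical, purely to illustrate the behaviour being alleged.

```python
# Conceptual sketch of server-side query substitution, as described in
# the WIRED excerpt. Hypothetical illustration only — not real Google code.

# Hypothetical table mapping organic queries to higher-revenue ones.
SUBSTITUTIONS = {
    "children's clothing": "NIKOLAI-brand kidswear",
}

def rewrite_query(user_query: str) -> str:
    """Return the query actually executed, which may differ from what
    the user typed. Note there is no opt-out parameter: from the user's
    point of view, the substitution is invisible."""
    return SUBSTITUTIONS.get(user_query, user_query)

print(rewrite_query("children's clothing"))  # NIKOLAI-brand kidswear
print(rewrite_query("weather tomorrow"))     # weather tomorrow (unchanged)
```

The point of the sketch is the asymmetry: refining your query just feeds a new input into the same lookup, which is why the article calls it "a twisted shopping mall you can't escape".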

Climate havens

I grew up in an ex-mining town, surrounded by ex-mining villages. At one point in my teenage years, I can distinctly remember wondering why people continued to live in such places once the reason for their existence had gone.

Now I’m an adult, of course I realise the many and varied economic, social, and emotional reasons. But still, the question remains: why do people live in places that don’t support a flourishing life?

One of the reasons that politicians are turning up the anti-immigration rhetoric at the moment is because they’re well aware of the stress that our planet is under. As this article points out, even if we reach net zero by 2050, the amount of carbon in the atmosphere means that some places are going to be uninhabitable.

That’s going to lead not only to international migration, but internal migration. We need to be preparing for that, not just logistically, but in terms of winning hearts and minds.

In 2022, climate change and climate-related disasters led nearly 33 million people to flee their homes and accounted for over half of all people newly displaced within their countries, according to data from the United Nations’ High Commissioner for Refugees and the Internal Displacement Monitoring Centre. This amount will surely increase over the next few decades.

Outside the United States and Canada, the World Bank predicts that climate change will compel as many as 216 million people to move elsewhere in their countries by 2050; other reports suggest that more than one billion people will become refugees because of the impacts of a warming planet on developing countries, which may exacerbate or even precipitate civil wars and interstate armed conflict.

[…]

The extraordinary pressure that continued international and domestic climate migration will impose upon state resources and social goods like schools, hospitals and housing is difficult to fathom. Over the past year, city and state governments in the U.S. have feuded over the distribution of migrants stemming from the Southern border, with New York Mayor Eric Adams declaring that the current migration wave will “destroy” the city.

[…]

The stark fact is that the amount of carbon dioxide already amassed in the atmosphere all but assures that certain zones will become uninhabitable by the end of the century, regardless of whether global greenhouse gas emissions reach net zero by 2050. If factories cannot operate at full capacity due to life-threatening climate conditions, periodic grid failures and difficult-to-replace labor shortages over the next two decades — and these challenges reverberate throughout their surrounding economies — the output of the renewables sector will falter and stall projects to decarbonize businesses, government agencies and households.

Source: The U.S. Government Should Push People To Move To Climate Havens | Noema

In the long run, people can only treat you the way you let them 

This blog post, which I discovered via Hacker News, is about ultimatums around ‘return to office’ mandates. But it’s also a primer on only allowing people to treat you the way you want to be treated.

People who abuse any power they have over you aren’t worth respecting and definitely aren’t worth hanging around. Although sometimes it’s difficult to realise it, the chances are that you’re bringing the talent to the table, which is why they’re acting in a way fuelled by insecurity.

If I had to give only one bit of advice to anyone ever faced with an ultimatum from someone with power over them (be it an employer or abusive romantic partner), it would be:

Ultimately, never choose the one giving you an ultimatum.

If your employer tells you, “Move to an expensive city or resign,” your best move will be, in the end, to quit. Notice that I said, ‘in the end’.

It’s perfectly okay to pretend to comply to buy time while you line up a new gig somewhere else.

That’s what I did. Just don’t start selling your family home or looking at real estate listings, and definitely don’t accept any relocation assistance (since you’ll have to return it when you split).

Conversely, if you let these assholes exert their power over you, you dehumanize yourself in submission.

Source: Return to Office Is Bullshit And Everyone Knows It | Dhole Moments

More on the vagus nerve (and exercise)

I mentioned a few weeks ago how researchers have been trying to electrically stimulate the vagus nerve, which is now thought to help treat everything from anxiety to depression.

In this study, researchers from the University of Auckland found that the vagus nerve plays a significant role during exercise. Contrary to the prevailing understanding that only the ‘fight or flight’ nervous system is active during exercise, this study shows that activity in the vagus nerve actually increases. This helps the heart pump blood more effectively, supporting the body’s increased oxygen needs during exercise.

Interestingly, especially for people I know who have heart failure, they also identified that the vagus nerve releases a peptide which helps dilate coronary vessels. This allows more blood to flow through the heart.

The vagus nerve, known for its role in 'resting and digesting,' has now been found to have an important role in exercise, helping the heart pump blood, which delivers oxygen around the body.

Currently, exercise science holds that the ‘fight or flight’ (sympathetic) nervous system is active during exercise, helping the heart beat harder, and the ‘rest and digest’ (parasympathetic) nervous system is lowered or inactive.

However, University of Auckland physiology Associate Professor Rohit Ramchandra says that this current understanding is based on indirect estimates and a number of assumptions their new study has proven to be wrong. The work is published in the journal Circulation Research.

“Our study finds the activity in these ‘rest and digest’ vagal nerves actually increases during exercise,” Dr. Ramchandra says.

[…]

There is a lot of interest in trying to ‘hack’ or improve vagal tone as a means to reduce anxiety. Investigating this was outside the scope of the current study. Dr. Ramchandra says we do know that the vagus mediates the slowing down of heart rate and if we have high vagal activity, then our hearts should beat slower.

“Whether this is the same as relaxation, I am not sure, but we can say that regular exercise can improve vagal activity and has beneficial effects."

Source: Vagus nerve active during exercise, research finds | Medical Xpress

University is about more than jobs and earning power

Next month, I embark on my fourth postgraduate qualification: an MSc in Systems Thinking in Practice. I also believe that alternative credentials such as Open Badges are valuable. That’s because the answer to an ‘either/or’ question is usually ‘yes/and’.

So I have sympathy with this article which talks about potentially going too far in discouraging people from going to university. What’s missing from this piece, as usual with these things, is that Higher Education isn’t just about earning power. It’s about expanding your mind, worldview, and experiences.

I got involved with Open Badges 12 years ago because I wanted my kids to have the option of going to university, rather than it being table-stakes for a decent job. We’re not quite there yet, but we’re a lot closer than we used to be. It’s a delicate balance, because I don’t want a liberal education to be the preserve of a wealthy elite.

Students
Wages grow faster for more-educated workers because college is a gateway to professional occupations, such as business and engineering, in which workers learn new skills, get promoted, and gain managerial experience. Most noncollege workers, in contrast, end up in personal services and blue-collar occupations, for which wages tend to stagnate over time.

[…]

Despite the bad vibes around higher education, the fastest-growing occupations that do not require a college degree are mostly low-wage service jobs that offer little opportunity for advancement. Negative public sentiment might dissuade some people from going to college when it is in their long-run interest to do so. The potential harm is greatest for low- and middle-income students, for whom college costs are most salient. Wealthy families will continue to send their kids to four-year colleges, footing the bill and setting their children up for long-term success.

Indeed, highly educated elites in journalism, business, and academia are among those most likely to question the value of a four-year degree, even if their life choices don’t reflect that skepticism. In a recent New America poll, only 38 percent of respondents with household incomes greater than $100,000 said a bachelor’s degree was necessary for adults in the U.S. to be financially secure. When asked about their own family members, however, that number jumped to 58 percent.

Source: The College Backlash Is Going Too Far | The Atlantic

Image: Good Free Photos

Intelligent failure

Andrew Curry links to Amy Edmondson’s new book about ‘intelligent failure’. She’s also got a recorded talk from the RSA on the same topic which I’ve queued up to watch.

Although some people who have sat through teacher in-service training days may beg to differ, there’s no such thing as wasted learning. It’s all grist to the inspiration mill, and I’m always surprised at how often insights are generated by unexpected overlaps.

This, though, isn’t about serendipity, but rather about goal-directed behaviours to reach an outcome. Which presupposes, of course, that we’re working towards a goal. In these times of rolling catastrophe, it’s worth remembering that having goals is something that used to be normal.

We are all taught these days that failure is an essential part of learning, and that we need to fail if we want to develop as people. But it’s one thing to hear that, and another thing to be able to do it. Because we have all grown up in education systems where failure is bad, and worked for organisations where failure gets punished in a whole range of less-then-explicit ways.

So it is interesting to see Amy Edmondson writing about “intelligent failure” on the Corporate Rebels blog. She has just published a book on this theme.

[…]

The first part of this is to know that there are different kinds of failures. The set of things that are included in “intelligent failures” does not include failures that happened because you couldn’t be bothered. But it does include failures that happen as a result of complexity or bad luck.

So by working hard to prevent avoidable failures, they are able to embrace the other ones.

Edmondson has developed a model from her research about intelligent failure, which the Corporate Rebels turned into one of their distinctive graphics. Here are her four criteria:

It (1) takes place in new territory (2) in pursuit of a goal, (3) driven by a hypothesis, and (4) is as small as possible. Because they bring valuable new information that could not have been gained in any other way, intelligent failures are praiseworthy indeed.

Source: Energy | Failing - by Andrew Curry

Falling asleep on the couch watching films

I can count on the fingers of no hands the number of times I’ve fallen asleep watching a film at home. I have, however, fallen asleep watching one at the cinema.

This is perhaps for three reasons. First, I usually wear contact lenses, but not when I’m in the cinema. Second, because my wife and I can’t seem to watch a film at home without pausing it half a dozen times. Third, because I’d rather read than watch a film.

So, yeah, this article isn’t for me. But I’m sharing it because I can’t really get into the mindset of someone for whom this is a problem.

I’ve watched the first half of a billion movies. This is how a typical movie night goes for me: After eating too many fries from Rocketbird and washing it down with a couple of beers, I’m swaddled in a plush blanket, horizontal on the couch, and zonked out long before Michelle Yeoh reaches the hotdog finger scene in Everything Everywhere All at Once.

Maybe your schedule is hectic, but you still want to catch every twist and turn in the Glass Onion movie. Or perhaps your significant other’s date-night selection seems like a snoozefest, and you’re attempting to roll credits on Morbius. Whatever your reason is to stay awake, keep the following advice in mind the next time you’re streaming something at home.

Source: How to Stop Falling Asleep on the Couch During Movies | WIRED

Yuval Noah Harari on the post-truth revolutionary right

Friend and collaborator Bryan Mathers recommended this episode of The Rest is Politics: Leading to me. While I’m a regular listener to the main podcast, which features only Alastair Campbell and Rory Stewart, I hadn’t previously bothered with the ones where they interview others.

This one with Yuval Noah Harari is great. It’s the second part of a two-part interview. In the first, recorded in August, Harari talks about the situation in Israel. In this second one, he zooms out a bit to talk about politics more generally, AI, and society.

The thing that struck me, about 5-10 minutes in, was his point about the left and right of politics not making sense any more. That’s something that others have said before. But his analysis was fascinating: the right has largely abandoned the role of being guardians of tradition to weaponise ‘truth’, which has led to the left being in the awkward position of custodian. That’s why everything feels topsy-turvy.

(also, I’m really pleased to have discovered pod.link to share podcast episodes in a non-platform-specific way as easily as song.link)

(also also, I found out about a podcast search engine called Listen Notes recently!)

The Rest is Politics: Leading
Rory Stewart and Alastair Campbell, hosts of Britain's biggest podcast (The Rest Is Politics), have joined forces once again for their new interview podcast, ‘Leading’. Every Monday, Rory and Alastair interrogate, converse with, and interview some of the world's biggest names - from both inside and outside of politics - about life, leadership, or leading the way in their chosen field. Whether they're sports stars, thought-leaders, presidents or internationally-recognised religious figures, Alastair and Rory lift the lid on the motivation, philosophy and secrets behind their career. Tune in to 'Leading' now to hear essential conversation from some of the world's most enthralling individuals.
Source: The Rest is Politics: Leading

Microcast #99 — EVs


Reflections on taking delivery of an electric vehicle (EV), including charging, business lease, and other rambling thoughts.

Show notes


Image: taken by me on my first run-out to Druridge Bay

Songs are not meme stocks

Remember NFTs? This article in The Guardian will help remind you of the heady days of early 2022 when digital images of monkeys were apparently extremely valuable. That article ends with a question: “what will the next NFT be? When will it drop? How much money will normal people end up spending on it?”

Here’s one answer: owning a slice of your favourite song. Or perhaps a popular song. Or an up-and-coming song. It’s essentially applying capitalism at the very smallest level possible, and treating cultural artifacts as commodities.

The article below in WIRED discusses a platform which offers this as a service. It’s a terrible idea on many levels, not least because, as we’ve seen recently, AI-generated music is tearing fandoms apart. I’ll sit this one out, thanks.

Imagine a retirement portfolio stocked with Rihanna hits, or a college fund fueled by Taylor Swift’s 1989. In a post-GameStop, post-NFT-mania world, it sounds plausible enough. Wholesome, even.

A new music royalties marketplace, Jkbx (pronounced “jukebox”), launched this month and plans to officially open for trading later this year. It has filed an application with the US Securities and Exchange Commission and is waiting for notice that the SEC has qualified its offerings. As long as that goes according to plan, Jkbx—god, why no vowels?—will allow fans to buy “royalty shares,” or fractionalized portions of royalties, fees, and other income associated with a particular song. Prices are within reach of regular people. One share of composition royalties for Beyoncé’s “Halo,” for example, is $28.61. You could also buy a slice of the song’s sound recording royalties for the same price.

[…]

Jkbx is debuting with some big-name slices, and is led by a guy with a good track record. “They are very sophisticated,” Round Hill Music founder and CEO Josh Gruss says. “The real deal.” Others agree. “We think they are going to be successful,” Hipgnosis Songs CEO and founder Merck Mercuriadis says.

Still, plenty of industry analysts and insiders view Jkbx, and the larger world of royalty trading, warily. “I think there are going to be very modest levels of return,” says Serona Elton, a music industry professor at the University of Miami.

“There is skepticism about how good of an alternative investment strategy something like this is,” musician and data analyst Chris Dalla Riva says.

“I don’t understand why people keep trying to spin this idea up,” adds producer and music tech researcher Yung Spielburg. “I just don’t get it.”

Source: The Next Meme Stock? Owning a Slice of Your Favorite Song | WIRED

Adversarial interoperability to return to a world of 'fast companies'

Cory Doctorow is one of my favourite people on the entire planet. I’ve heard him speak in person and online on numerous occasions. I met him a couple of times while at Mozilla, and he’s even recommended swimming pools in Toronto to me when I visited. (He’s a daily swimmer due to chronic back pain.)

His new book, which I’m saving to read for my next holiday, is The Internet Con: How to Seize the Means of Computation. In this interview as part of promoting the book, he talks about how we’ve ended up in a world without real competition in the technology marketplace. Essential reading, as ever.

There used to be a time when the tech sector could be described as a bunch of “fast companies,” right? They would use the interoperability that’s latent in all digital technology and they would specifically target whatever pain points the incumbent had introduced. If incumbents were making money by showing you ads, they made an ad blocker. If incumbents were making money by charging gigantic margins on hard drives, they made cheaper hard drives.

Over time, we went from an internet where tech companies more or less had their users’ backs, to an internet where tech companies are colluding to take as big a bite as possible out of those users. We do not have fast companies anymore; we have lumbering behemoths. If you’ve started a fast company, it’s probably just a fake startup that you’re hoping to get acqui-hired by one of the big giants, which is something that used to be illegal.

As these companies grew more concentrated, they were able to collude and convince courts and regulators and lawmakers that it was time to get rid of the kind of interoperability, the reverse engineering that had been a feature of technology since the very beginning, and move into a new era in which no one was allowed to do anything to a tech platform that their shareholders wouldn’t appreciate. And that the government should step in to use the state’s courts to punish anyone who disagrees. That’s how we got to the world that we’re in today.

Source: Cory Doctorow: Silicon Valley is now a world of ‘lumbering behemoths’ | Fast Company

Sycamore Stump

There is, or rather was, a tree that symbolised the North East of England. Standing at a dip in the ground along Hadrian’s Wall called ‘Sycamore Gap’, it’s a tree I’ve visited many times with friends and family. Last year, when I walked the wall in 72 hours, it was a familiar touchstone.

Now the iconic 200-year-old tree, which featured in the film Robin Hood: Prince of Thieves, is gone. Felled by a 16-year-old in an act of wanton vandalism. On a World Heritage Site. Some people just want to watch the world burn.

It didn’t take long for someone to rename the place on Google Maps where the tree used to stand to ‘Sycamore Stump’. Hopefully they will build some kind of memorial to it. I do think it’s difficult for someone not from the region to understand just how important things like this are to one’s identity.

A 16-year-old boy has been arrested in northern England in connection with what authorities described as the “deliberate” felling of a famous tree that had stood for nearly 200 years next to the Roman landmark Hadrian’s Wall.

[…]

Photographs from the scene on Thursday showed the tree was cut down near the base of its trunk, with the rest of it lying on its side.

Northumbria Police said the teen was arrested on suspicion of causing criminal damage. He was in police custody and assisting officers with their inquiries.

[…]

“This is an incredibly sad day,” police Superintendent Kevin Waring said. “The tree was iconic to the North East and enjoyed by so many who live in or who have visited this region.”

Source: ‘Incredibly sad day’: Teen arrested in England after felling ancient tree | Al Jazeera

Image: Oli Scarff / Agence France-Presse (taken from NYT article)

Please consider stopping eating animals

I don’t know how many people reading this are vegans or vegetarians. I was a pescetarian from October 2017 to January 2020, and since then I’ve stopped eating fish too.

The reason I stopped eating meat was because of an article about the number of chickens that are killed each day. You can say that the actual death is painless, but they’re reared to have terrible lives, and many of them are electrocuted to death because it’s the cheapest method.

I’m sorry if this is shocking to you, but it’s even more shocking to the chickens. Eating meat is bad for your long-term health, bad for the environment, and ethically dubious. I’m not particularly interested in whether you agree right now, but I’d like you to consider whether you’re on the right side of history.

My plan is to eventually turn vegan. I’ve replaced most of my milk consumption, including in tea and coffee, with oat milk.

The scale of humanity’s meat consumption is enormous. 360 million tonnes of meat every year.

This number is so large that I find it impossible to comprehend. What helps me to make these numbers more relatable is to turn them from the weight of meat to the number of animals and from the yearly total to the daily number. This is what I have done in the graphic below. It shows how many animals are slaughtered on any average day.

About 900,000 cows are slaughtered every day. If every cow was 2 meters long, and they all walked right behind each other, this line of cows would stretch for 1800 kilometers.

For chickens, the daily count is extremely large – 202 million chickens every day. To comprehend the scale, it is better to bring it down to the average minute: 140,000 chickens are slaughtered every minute.

The number of fish killed every day is very uncertain. I discuss this in some detail at the end of this article. But while the uncertainties are large, it is clear that the number of fish killed is large: certainly, hundreds of millions of fish are killed every day.

If you believe that the slaughter of animals causes them to suffer and attribute even a small measure of ethical significance to their suffering, then the moral scale of this reality is immense.

Source: How many animals get slaughtered every day? | Our World in Data
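The scale figures quoted above are easy to sanity-check. Here's a small sketch that reproduces the arithmetic, using only the numbers from the excerpt (the 2-metre cow length is the article's own illustrative assumption):

```python
# Reproducing the back-of-the-envelope arithmetic from the excerpt above.
cows_per_day = 900_000
cow_length_m = 2  # the article's illustrative assumption
line_km = cows_per_day * cow_length_m / 1000  # nose-to-tail queue length
print(f"{line_km:.0f} km")  # 1800 km, matching the quoted figure

chickens_per_day = 202_000_000
chickens_per_minute = chickens_per_day / (24 * 60)
print(f"{chickens_per_minute:,.0f} per minute")  # ~140,278; the article rounds to 140,000
```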

'Personalisation' is something that humans do

Audrey Watters, formerly the ‘Cassandra’ of edtech, is now writing about health, nutrition, and fitness technologies at Second Breakfast. It’s great, I’m a paid subscriber.

In this article, she looks at the overlap between her former and current fields, comparing and contrasting coaches and educators with algorithms. While I don’t share her loathing of ChatGPT, as an educator and a parent I’d definitely agree that supporting motivation and attention is something to which a human is (currently) best suited.

How well does a teacher or trainer or coach know how you feel, how well you performed, or what you should do or learn next? How well does an app know how you feel, how well you performed, or what you should do next? Digital apps insist that, thanks to the data they collect, they can make better, more precise recommendations than humans ever can — dismissing what humans do as “one size fits all.” Yet it's impossible to scrutinize their algorithmic decision-making. Ideally, at least, you can always ask your coach, "Why the hell am I doing bulgarian split squats?! These suck." And she will tell you precisely why. (Love you, Coach KB.)

And then (ideally) she’ll say, “If you don’t want to do them, you don’t have to.” And (ideally), she’ll ask you what’s going on. Maybe you feel like shit that day. Maybe you don’t have time. Maybe they hurt your hamstrings. Maybe you’d like to hear some options — other exercises you can do instead. Maybe you’d like to know why she prescribed this exercise in the first place — “it’s a unilateral exercises, and as a runner,” she says, “we want to work on single-leg strength, with a focus on your glute medius and adductors because I’ve noticed, by watching your barbell squats, that those areas are your weak spots.” This is how things get “personalized” — not by some massive data extraction and analysis, but by humans asking each other questions and then tailoring our responses and recommendations accordingly. Teachers and coaches do this every. goddamn. day. Sure, there’s a training template or a textbook that one is supposed to follow; but good teachers and coaches check in, and they switch things up when they’re not really working.

[…]

If we privilege these algorithms, we’re not only adopting their lousy recommendations; we’re undermining the expertise of professionals in the field. And we’re not only undermining the expertise of professionals in the field, we’re undermining our own ability to think and learn and understand our own bodies. We’re undermining our own expertise about ourselves. (ChatGPT is such a bad bad bad idea.)

Source: Teacher/Coach as Algorithm | Second Breakfast

Migraines and 'ability'

Granted it’s been over a decade, but when I worked at a university I had to be on the ‘disabled’ register due to my migraines. That meant my line manager could make accommodations such as being sat next to a window so the fluorescent lights didn’t trigger me.

Almost everyone I know has some kind of medical condition which affects their work to a greater or lesser extent. These are the things that we used to hide, until we realised (perhaps for the first time during the pandemic) that we’re all just temporarily abled.

This letter in The Guardian is a response to an article about a minister stepping down due to chronic migraines. I don’t get 15 or more a month, as she does, but I probably average 3-4 and, because they add up, it’s imperative that I have flexible working conditions. But then, shouldn’t we all?

Dehenna Davison has resigned as a minister, citing chronic migraines (Report, 18 September). Migraines are a common and debilitating condition affecting many people; chronic migraine is defined as an ongoing experience of 15 or more migraine days a month. So it is not difficult to imagine how hard it has been for Ms Davison to give the energy she wants to her role.

But while it is valuable that chronic migraines have been given some media attention, it is also troubling that the message is, unfortunately, that those with such conditions do not have equal value and should quit if they can’t manage the job – a message that many people living with migraines and other long-term conditions and disabilities will be familiar with, whatever their role or employer.

Managing work, life and migraines takes more than the “patience at times” that Davison thanks her colleagues for. It needs recognition, respect and a commitment from employers to prioritise the health of workers and support them to work with the condition, not drop back because of it.

Anna Martin Oxford

Source: People living with migraines need better support from employers | The Guardian

No career progression on a dead planet

There’s a film starring Matt Damon called Elysium from 2013 in which the wealthy live on a man-made space station in luxury, while the rest of the population live on a ruined Earth. With the latest announcement about a new huge oilfield being opened in the North Sea, the obscene desire for global elites to put profit before planet is clear for all to see.

As we hurtle towards this scenario, many have realised that there is no longer any link between meaningful work, a decent salary, and a fulfilling life.

Person sitting on a hillside
James, a 31-year-old in Glasgow, had always worked hard, from striving for a first at university to working until 8pm or 9pm at the office in the civil service in the hopes of getting noticed.

But during lockdown in 2020, James had an epiphany about what he valued in life when reading the book Bullshit Jobs by the anthropologist David Graeber. “He talks a lot about how jobs that provide social utility are generally pay-poor while the inverse are paid more,” James says.

James felt he was working doggedly – but not necessarily either generating public good or building a stable financial life. “It felt futile … You can work really hard and you’re still not going to get ahead,” he says.

“Salaries and housing costs are so mismatched at this point that you would really need to jump ahead in your career to be able to buy in parts of the country. Not that [owning property] is the be-all and end-all, but it’s kind of a foundation to having financial stability.”

He now focuses on his life, putting his phone on aeroplane mode while doing activities such as hiking, reading and watching films. “I still value work, I’m very committed to my position. But I’ve just realised that this myth a lot of millennials were told – graft, graft, graft and you’ll always get what you want – isn’t necessarily true,” James says. “It’s a reprioritisation.”

Social mobility in the UK is at its worst in more than 50 years, a recent study from the Institute for Fiscal Studies found, with children from poor households finding it harder than 40 years ago to move into higher income brackets. The IFS said gifts and inheritances from older generations were becoming more important to household incomes.

Source: ‘It felt futile’: young Britons swap career-driven lives for family and fun | The Guardian

AI generated images with subliminal messages

You’ve probably seen some of these already. Someone discovered that if you take the generator intended for QR codes and feed it something different, such as words or logos as black-and-white masks, it can blend them into generated images.

There are lots more examples at knowyourmeme and you can try creating your own using KREA.

Nike subliminal AI images

ControlNet uses the AI image-generating tool Stable Diffusion, and one of its initial uses was generating fancy QR codes using the code as an input image. That idea was then taken further, with some users developing a workflow that lets them specify any image or text as a black-and-white mask that implants itself into the generated image—kind of like an automated, generative version of the masking tool in Photoshop.

“What happened there was that this user discovered that if they used the QR Code ControlNet but instead of feeding it a QR code, they fed it some other black-and-white patterns, they could create nice optical illusions,” said Passos. “You can now send a conditioning image and the model blends in a pattern that satisfies that while still making a coherent image at the same time.”

Source: AI-Generated ‘Subliminal Messages’ Are Going Viral. Here’s What’s Really Going On | VICE

On preparing, issuing, and claiming badges

I attended a Navigatr webinar at lunchtime today where they shared this graphic which underscores the importance of encouraging badge earners to share their achievements on social networks.

What I appreciated about the webinar was the way in which the team explained the importance of preparing for and then following up the issue of the badge to ensure that it’s claimed.

Our study of several recently shared digital badges on social media as shown below showed that on average, a posted badge received 500-1k impressions and 25 interactions, of which, 4-5 were actual comments.  We found that the number of connections and days since posted lead to increases in the number of interactions.  Engagement seemed to plateau around 4-5 days and those with several hundred to 500+ connections were most likely to receive numerous interactions.  Location – whether the US or abroad did not seem to matter, suggesting the power of social media is universal when it comes to engagement.
Source: Improve Brand Engagement with Digital Badges | BadgeCert

Telling stories using cartoons

Liza Donnelly is a cartoonist for the New Yorker. In this article, which came out of preparatory work for an upcoming talk, she discusses how the best cartoons work.

I’ve had the privilege of working with Bryan Mathers over the last decade and it really is a fascinating process. In fact, he’s just delivered a bunch of artwork for the work we’re doing around Open Recognition. Check it out here!

New Yorker cartoon
Story is everywhere. In single panel cartoons, they have to be kept in one image. It’s tricky and challenging and I love it. I like to say that a single panel cartoon is like a mini stage. The artist is a set designer, choreographer, script writer, costume designer, casting director. Each element in the drawing needs to be necessary for the idea, no more, no less; there are exceptions of course. Some creators are known for a style that is overly detailed and complicated, and that is part of the voice of the artist and contributes to the story. The image is a moment in time, and you have to feel that there is time before the moment you see, and a continuation after that moment. And the characters are well “described” in the execution.

[…]

Bottom line: story in the best New Yorker cartoons tell us a story about the characters that are in the drawing, and about ourselves. This is why we love them so much—they are fun, entertaining and are about us.

Source: Storytelling In Drawing | Seeing Things

AI = surveillance

Social networks are surveillance systems. Loyalty cards are surveillance systems. AI language models are surveillance systems.

We live in a panopticon.

Why is it that so many companies that rely on monetizing the data of their users seem to be extremely hot on AI? If you ask Signal president Meredith Whittaker (and I did), she’ll tell you it’s simply because “AI is a surveillance technology.”

Onstage at TechCrunch Disrupt 2023, Whittaker explained her perspective that AI is largely inseparable from the big data and targeting industry perpetuated by the likes of Google and Meta, as well as less consumer-focused but equally prominent enterprise and defense companies. (Her remarks lightly edited for clarity.)

“It requires the surveillance business model; it’s an exacerbation of what we’ve seen since the late ’90s and the development of surveillance advertising. AI is a way, I think, to entrench and expand the surveillance business model,” she said. “The Venn diagram is a circle.”

“And the use of AI is also surveillant, right?” she continued. “You know, you walk past a facial recognition camera that’s instrumented with pseudo-scientific emotion recognition, and it produces data about you, right or wrong, that says ‘you are happy, you are sad, you have a bad character, you’re a liar, whatever.’ These are ultimately surveillance systems that are being marketed to those who have power over us generally: our employers, governments, border control, etc., to make determinations and predictions that will shape our access to resources and opportunities.”

Source: Signal’s Meredith Whittaker: AI is fundamentally ‘a surveillance technology’ | TechCrunch

Screens, addiction, and parenting

I spent my lunchtime packaging up my beloved PlayStation 5. I’m going to send it to my brother-in-law and his family until my son heads off to university. This directly impacts me and my extra-curricular activities, but I’m at my wits’ end.

He can’t control his use of it, sadly. Combined with his use of a smartphone, I feel like I’ve failed as a parent despite all of the things I’ve tried. I wrote my doctoral thesis on digital literacies, for goodness’ sake.

Ben Werdmuller’s at the other end of the spectrum with his son. I wish him the best of luck.

Kid under chair looking at screens
We walk our son to daycare via the local elementary school. This morning, as we wheeled his empty stroller back past the building, a school bus pulled up outside and a stream of eight-year-olds came tumbling out in front of us. As we stood there and watched them walk one by one into the building, I saw iPhone after iPhone after iPhone clutched in chubby little hands. Instagram; YouTube; texting.

It’s obvious that he’ll get into computers early: he’s the son of someone who learned to write code at the same time as writing English and a cognitive scientist who does research for a big FAANG company. Give him half a chance and he’ll already grab someone’s phone or laptop and find modes none of us knew existed — and he’s barely a year old. The only question is how he’ll get into computers.

[…]

He’s entering a very different cultural landscape where computers occupy a very different space. Those early 8-bit machines were, by necessity, all about creation: you often had to type in a BASIC script before you could use any software at all. In contrast, today’s devices are optimized to keep you consuming, and to capture your engagement at all costs. Those iPhones those kids were holding are designed to be addiction machines.

Source: Parenting in the age of the internet | Ben Werdmuller

Conspicuously sesquipedalian communication

Getting people to understand your ideas is a difficult thing. That’s why it’s been so gratifying to work at various times with Bryan Mathers over the last decade. We humans are much better at processing visual inputs than deciphering text.

That being said, as Derek Thompson shows in this article, you have to begin with the realisation that simple is smart. It’s much easier to just write down what’s in your head than to express it in a way that’s easy for others to understand.

In some ways, this reminds me of my work on ambiguity, which was a side-product of the work I did on my doctoral thesis. It’s also a good reminder that one of the best uses that most people can make of AI tools such as ChatGPT is to simplify their work.

Shadow of person typing
High school taught me big words. College rewarded me for using big words. Then I graduated and realized that intelligent readers outside the classroom don’t want big words. They want complex ideas made simple.  If you don’t believe it from a journalist, believe it from an academic: “When people feel insecure about their social standing in a group, they are more likely to use jargon in an attempt to be admired and respected,” the Columbia University psychologist Adam Galinsky told me. His study and other research found that when people use complicated language, they tend to come across as low-status or less intelligent. Why? It’s the complexity trap: Complicated language and jargon offer writers the illusion of sophistication, but jargon can send a signal to some readers that the writer is dense or overcompensating. Conspicuously sesquipedalian communication can signal compensatory behavior resulting from suboptimal perspective-taking strategies. What? Exactly; never write like that. Smart people respect simple language not because simple words are easy, but because expressing interesting ideas in small words takes a lot of work.
Source: Why Simple Is Smart | The Atlantic

What people are really using generative AI for

As I’ve written several times before here on Thought Shrapnel, society seems to act as though the giant, monolithic, hugely profitable porn industry just doesn’t… exist? This despite the fact it tends to be a driver of technical innovation. I won’t get into details, but feel free to search for phrases such as ‘teledildonics’.

So this article from the new (and absolutely excellent) 404 Media on a venture capitalist firm’s overview of the emerging generative AI industry shouldn’t come as too much of a surprise. As a society and as an industry, we don’t make progress on policy, ethics, and safety by pretending things aren’t happening.

As a father, I find this kind of news more than a little disturbing. And we don’t deal with any of it by burying our heads in the sand, shaking our heads, or crossing our fingers.

The Andreessen Horowitz (also called a16z) analysis is derived from crude but telling data—internet traffic. Using website traffic tracking company Similarweb, a16z ranks the top 50 generative AI websites on the internet by monthly visits, as of June 2023. This data provides an incomplete picture of what people are doing with AI because it’s not tracking use of popular AI apps like Replika (where people sext with virtual companions) or Telegram chatbots like Forever Companion, which allows users to talk to chatbots trained on the voices of influencers like Amouranth and Caryn Marjorie (who just want to talk about sex).

[…]

What I can tell you without a doubt by looking at this list of the top 50 generative AI websites is that, as has always been the case online and with technology generally, porn is a major driving force in how people use generative AI in their day to day lives.

[…]

Even if we put ethical questions aside, it is absurd that a tech industry kingmaker like a16z can look at this data, write a blog titled “How Are Consumers Using Generative AI?” and not come to the obvious conclusion that people are using it to jerk off. If you are actually interested in the generative AI boom and you are not identifying porn as core use for the technology, you are either not paying attention or intentionally pretending it’s not happening.

Source: 404 Media Generative AI Market Analysis: People Love to Cum

Oh great, another skills passport

I’ve spent the last 12 years working in the ecosystem around Open Badges, which provides an alternative accreditation system. It didn’t come out of thin air, and before this there was plenty of work around e-portfolios. Next up we’ve got Verifiable Credentials which allow for lots of things, including endorsement.

Frustratingly, over the past couple of decades, people several steps removed from actual job markets and education systems have decided to weigh in. Inevitably, they use the metaphor closest to hand, which tends to be a ‘passport’.

Not only is this the wrong metaphor, but it also diverts money and attention from fixing some of the real issues in the system. I’d suggest that there are at least three:

  1. Taxonomic straitjackets — we don't tend to recognise everything that makes for a valuable employee or colleague. There are behaviours that are valuable, as well as esoteric knowledge and skills that don't fit into pre-defined taxonomies.
  2. Hiring is broken — this deserves a whole other blog post, but current systems tend to automate the very things that need a human touch. Hence, applicants spend an inordinate amount of time searching for and applying for jobs, while algorithms reject people who would be a perfectly good fit.
  3. References are outdated — one organisation I used to work for stopped taking references because a) in most jurisdictions, it's against the law to make negative comments, and b) they're generally unreliable. Yet the whole system is predicated on them. Endorsements and recommendations based on network relationships are much more valuable.
I could go on, and probably will over at my personal blog. Or perhaps the Australian government can give me $9.1 million to point them in the right direction.
The passport system is intended to help workers advertise their full range of qualifications, micro-credentials, prior learning, workplace experience and general capabilities.

Businesses, unions, tertiary institutions and students are among those the federal government says will be consulted about the initiative.

Treasurer Jim Chalmers said the goal was to make it easier for employers to find highly-qualified staff and for workers to have their qualifications recognised.

“We want to make it easier for more workers in more industries to adapt and adopt new technology and to grab the opportunities on offer in the defining decade ahead of us,” Chalmers said.

Source: National Skills Passport: Government aims to connect workers and employers | SBS News

If your heart isn’t in it, it’s probably because there’s no heart anywhere in the process

One thing I’ve learned spending over a decade thinking about Open Badges and alternative credentials is that hiring is broken. Although there are mitigations and workarounds — some of which I’ve implemented when hiring a team and helping others do so — the whole thing is a dumpster fire.

This article by Paul Fuhr discusses the horror show that is job hunting in the age of platforms such as Indeed. He does a great job of showing how automated and dehumanising the whole hiring process is. Platforms are more focused on user engagement than genuinely aiding job seekers; applicants are reduced to mere data points.

Not only that, but the lack of human-centricity to the whole thing fails to accommodate those with non-linear careers while simultaneously trivialising the job search process. Unsurprisingly, he’s calling for root-and-branch reform of the current job market. I can’t help but think that badges and alternative credentials can make the whole thing more transparent and fair, moving away from automated metrics.

I’ve applied for (quite literally) thousands of jobs. Very quickly, I went from being surgically precise about job applications to taking a shotgun-blast approach to it all, spraying applications out in every direction. I’ve clicked the “Submit” button on countless career sites. I’ve created four different versions of my resume. I’ve spent more time on LinkedIn than any other site, too, though I suspect Reddit is happy to have some server bandwidth back.

Searching for a steady job is a disheartening and depressingly tedious affair, but it doesn’t have to be. If I’m qualified for anything at the moment, though, it’s being qualified to weigh in on the contemporary job-search experience. I know what it is, what it isn’t, what it pretends to be, why it no longer works, and what needs to change. And thanks to a year-plus of trying to find consistent work, it’s no longer about connecting me with the job of my dreams — it’s about connecting me with my dream of simply having a job.

[…]

Machine learning, AI, automation, yadda yadda yadda. I get it. I understand the “why” of automating the hiring process; I even think it can be a helpful (jargon alert) “arrow in the quiver” for HR. I can’t even imagine a single HR specialist being tasked to locate the right candidate from a huge field of applicants for one job, let alone fifteen jobs at once. That’s like finding a needle in a stack of needles. It’d be paralyzing.

That said, hiring managers and job seekers have arrived at a truly dangerous intersection. Employers have allowed automation to creep in and govern so much of the HR process that it threatens to ignore the whole…well, you know, human part of it all. And some companies insist on doubling-down on this façade; I’ve visited a shocking number of sites that pretend to have an actual human person ready to chat with you (certainly not a bot!), as if they’re impossibly waiting 24/7 to answer your questions.

We’re at a maddeningly mindless moment when it comes to finding employment, but it’s one that could be repaired with some maddeningly simple ideas. For starters, just bring back some humans. Robots can parse your past and distill you down into data, but they’ll never make a genuine connection or get a sense of you are. Also, simplicity works both ways: it benefits the applicant as much as an HR specialist.

Source: Why Resumes Are Dead & How Indeed.com Keeps Killing the Job Market | Paul Fuhr

A trickle, a ripple, a slow rush

This article by Antonia Malchik reflects on her personal journey moving back to her hometown in Montana. It focuses on her deep sense of gratitude for the natural environment and community. She discusses the annual Gathering of the Glacier-Two Medicine Alliance, celebrating the retirement of the last remaining oil lease in the area, which is significant for the Blackfeet Nation.

The part of the article in which I’m most interested is towards the end: a reflective moment by a creek. She writes about the importance of being present in nature and contemplating one’s place and responsibilities in the world. That feeling of being in and of nature after a day’s walking, feeling quite emotional. It stirs my soul just thinking about it.

On my way home, I stopped at a creek I’m fond of, near a trailhead leading into the Bob Marshall Wilderness. The parking lot was empty of other cars or people. Last year when I’d camped there, the creek had held a delightful number of cylindrical caddisfly shells constructed from gravel about the size of a sesame seed. I looked for them but it was too late in the year.

The creek ran cold across my bare feet, its sound and movement and chilly reminders of snowmelt all I really need in this world to ground myself in what’s real, and what matters. I sat there letting my feet go numb and the sound run through me, September’s late afternoon sunlight filtering through the aspen trees to glance off the water.

I don’t even know what to call that sound—a trickle, a ripple, a slow rush?

Sometimes the right answer is an action. Sometimes it’s a change in policy, or in culture. And sometimes it’s simply being, sitting there by a creek reminding yourself what it feels like to be alive, in a place you love. It’s asking questions of belonging and responsibility, and struggling with your own place in the world.

That sound is all of life to me. I could have sat there forever, grown cold and hungry, but I never for a moment would have felt alone.

Source: Sometimes there’s a right answer, sometimes you sit by a creek, and sometimes they’re the same thing | On The Commons

If LLMs are puppets, who's pulling the strings?

The article from the Mozilla Foundation surfaces the human decisions that shape generative AI. It highlights the ethical and regulatory implications of these decisions, such as data sourcing, model objectives, and the treatment of data workers.

What gets me about all of this is the ‘black box’ nature of it. Ideally, for example, I want it to be super-easy to train an LLM on a defined corpus of data — such as all Thought Shrapnel posts. Asking questions of that dataset would be really useful, as would an emergent taxonomy.

Generative AI products can only be trustworthy if their entire production process is conducted in a trustworthy manner. Considering how pre-trained models are meant to be fine-tuned for various end products, and how many pre-trained models rely on the same data sources, it’s helpful to understand the production of generative AI products in terms of infrastructure. As media studies scholar Luke Munn put it, infrastructures “privilege certain logics and then operationalize them”. They make certain actions and modes of thinking possible ahead of others. The decisions of the creators of pre-training datasets have downstream effects on what LLMs are good or bad at, just as the training of the reward model directly affects the fine-tuned end product.

Therefore, questions of accountability and regulation need to take both phases seriously and employ different approaches for each phase. To further engage in discussion about these questions, we are conducting a study about the decisions and values that shape the data used for pre-training: Who are the creators of popular pre-training datasets, and what values guide their work? Why and how did they create these datasets? What decisions guided the filtering of that data? We will focus on the experiences and objectives of builders of the technology rather than the technology itself with interviews and an analysis of public statements. Stay tuned!

Source: The human decisions that shape generative AI: Who is accountable for what? | Mozilla Foundation

Bad historical maps

Like the author of this article, I love a good map. Whether it’s trekking across hills and mountains with an OS map, or looking through historical maps, there’s something enchanting about understanding territories.

The thing is, though, that maps are literally projections. They leave things out and therefore have to be interpreted. If the maps are out of date, or are being used in a way that’s anachronistic, that leads to a huge problem.

As a History teacher, I used to teach WWI but didn’t know that General von Schlieffen, the Chief of the German General Staff, was obsessed with Hannibal and the Battle of Cannae. Apparently he used stories and maps of how it played out to inform his strategy. The problem was that, not only did it happen a couple of millennia beforehand, but it probably didn’t even play out that way.

Maps like this are a big part of why I became a historian. I probably spent more time looking through the volumes of Colin McEvedy’s Penguin Atlas of History series than any other book when I was a kid.... There’s something beguiling about the thought that a simple arrangement of lines might explain the world — like seeing human history as an enormous game of Civilization 6. But of course, that’s also the problem with using maps as a way of understanding history. If you’re not careful, they go from being helpful tools to misleading simplifications.

[…]

In her book The Guns of August, Barbara Tuchman argues that the memory of Cannae, which was passed down through a succession of military histories until it became a virtual obsession of strategists in the 19th century, helped push the world into an unimaginable catastrophe.

It did so by offering up a model of a “battle of annihilation” that Germany’s war planners believed they could unleash on France. At the head of these planners was General von Schlieffen, the Chief of the German General Staff. The map of Cannae haunted Schlieffen’s dreams.

[…]

Cannae was no vague inspiration. It was a direct model for Germany’s invasion of Belgium and France.

[…]

As the historian Martin Samuels pointed out in his article “The Reality of Cannae,” there is no archaeological evidence for the battle. Nor are there first-hand sources of any kind. Everything we know derives from accounts written sixty years or more after Cannae itself. Suffice to say, when Samuels dug into these sources, he found as many questions as answers. The detailed maps of movements at Cannae that decorated military strategy manuals for hundreds of years, in other words, were largely fanciful. Samuels calls Cannae “the most quoted and least understood battle” in history.

The simplicity of a historical map — the clear labels, the sharp edges, and above all the reduction of thousands or millions of people into abstract symbols — is a big part of why they’re so beguiling. But it’s also why they lead us astray.

[…]

It is sometimes said that the map is not the territory. The map is not the historical argument, either.

Instead, maps are a great way to pose questions about history. They are best approached as a way in: an entry-point rather than an ending. They offer one path toward confronting the enormous complexity of “real” history — the kind made by individual people, on the decidedly imperfect and unmap-like terrain of the world.

Source: Historical maps probably helped cause World War I | Res Obscura

More treasures and secrets from ancient Egypt

Underwater archaeologists have discovered a sunken temple off Egypt’s Mediterranean coast, filled with artefacts related to the god Amun and the goddess Aphrodite. The temple was part of the ancient port city of Thonis-Heracleion, which sank due to a major earthquake and tidal waves.

When we discover the remnants of civilizations buried under the sea and in other places, it does make me think about humans in the future discovering what we leave behind. What will they think?

While exploring a canal off the Mediterranean coast of Egypt, underwater archaeologists discovered a sunken temple and a sanctuary brimming with ancient treasures linked to the god Amun and the goddess Aphrodite, respectively.

The temple, which partially collapsed “during a cataclysmic event” during the mid-second century B.C., was originally built for the god Amun; it was so important, pharaohs went to the temple “to receive from the supreme god of the ancient Egyptian pantheon the titles of their power as universal kings,” according to a statement from the European Institute for Underwater Archaeology (IEASM).

[…]

Also at the site divers found underground structures supported by “well-preserved wooden posts and beams” that dated to the fifth century B.C., they wrote in the statement.

[…]

The sanctuary also held a cache of Greek weapons, which could indicate that Greek mercenaries were in the region at one time “defending the access to the Kingdom” at the mouth of the Nile’s westernmost, or Canopic, branch, the researchers said in the statement.

Source: Sunken temple and sanctuary from ancient Egypt found brimming with ‘treasures and secrets’ | Live Science

Death, wrecks, and harsh weather

There was a time, about a decade ago, when, although I was based at home, I’d be travelling pretty much every week for work. I was abroad at least once a month.

These days, perhaps with the pandemic as a catalyst, I’m slightly more wary of travelling. It’s probably also a function of age and awareness of how routines affect my body. As an historian, though, I’ve always been amazed by those people who journeyed long distances.

This post by an academic historian of medicine and the body outlines some of the dangers such travellers faced. Pretty amazing, when you think about it.

Unlike today, when it’s entirely possible to have breakfast in London, lunch in Milan and be back at home in time for supper, travel in the early modern period was no easy undertaking. More than this, it was widely acknowledged to be inherently dangerous. What, then, were the perceived risks? Even a brief survey tells us a lot about how travel was regarded in health terms.

First was the risk of accident or death on the journey. In the seventeenth century even relatively short distances on horseback or in a carriage carried dangers. Falls from horses were common, causing injury or even death.

[…]

Travel by sea, even around local coasts, carried its own obvious risks of storm and wreck. So common and widely acknowledged were the vagaries of sea travel that a common reason for making a will in the early modern period was just before embarking on a voyage.

[…]

Once abroad, too, travellers were at the mercy of a bevy of dangers, from unfamiliar territories and extreme landscapes to harsh weather and climate, their safety contingent on the quality of their transport and the reliability of their guides.

[…]

Even ‘foreign’ food and drink could be risky. Thomas Tryon’s Miscellania (1696) noted the dangers of ‘intemperance’ and of misjudging the effects of climate upon the body in regard to drinking alchohol [sic]

Source: The Health Risks of Travel in Early-Modern Britain | Dr Alun Withey

Microcast #98 — Endorsement


The introduction to some thoughts on endorsement using Open Badges and Verifiable Credentials within networks of trust.

Show notes


Image: Unsplash

Virtual spaces for learning and collaboration

Today, I’ve been doing a UCL short course. As we were coming back from a break, we were discussing the lack of ‘embodiedness’ in virtual interactions. This reminded me of experiments with different platforms that WAO did during the pandemic.

This post by Alja from Tethix is prompted by a challenge-based learning programme that I took part in last year. They’re focusing on tech ethics (hence the name) and their approach was great. It was just that the tools got in the way to some extent.

I think, after reading this, it’s time to experiment again with some of the tools mentioned in the post. Sometimes you do need a sense of play, and to feel connected in ways that go beyond small boxes on a screen.

The Tethix Archipelago emerged from the Challenge Based Learning pilot we did in March last year. For the pilot, we designed a unique collaborative online learning experience in tech ethics and used Mural collaborative whiteboards and visual storytelling to situate the learning journey in a fictional world: the Tethix Archipelago. The Archipelago consists of four islands that emerged from the four essentials skills of the Challenge Based Learning journey: collaboration, exploration, practice, and reflection.

Mural turned out to be a great tool for collaboration and live session guidance, but it didn’t really convey a sense of place. Clicking on a link in a Mural to visit the next leg of your journey just doesn’t feel like traveling, especially when you’re trapped in the same little Zoom box during every live session.

So we started exploring tools that could help us convey a sense of space and discovered Gather and WorkAdventure, among others. These tools offer two-dimensional virtual collaborative spaces where you can walk around a space with your avatar and have proximity-based conversations by using your microphone and camera.

[…]

You might be thinking: this is cute and all, but is this Archipelago all games and play? Well, playfulness is a big part of why we’re experimenting with these game-like worlds; we know that play helps us learn better and can unlock our imagination. But there’s much more to it than just millennial nostalgia for pixel graphics.

As already mentioned, Gather allows us to build a sense of place, both inside rooms and between them. And a sense of place helps with learning and memory encoding. Historical records show ancient Greeks using the method of loci or memory palaces, a technique for improving memory encoding and retrieval, and humans have been developing other mnemonic techniques based on spatial relationship for much longer than that. We’re physical beings, uniquely equipped to understand space, whether physical or represented by pixels.

Source: Welcome to the Tethix Archipelago | Tethix

The Social Media Archipelago

On 1st October, I’ll be transitioning the Thought Shrapnel newsletter to Substack. More about that here. What’s interesting is the ecosystem that’s being created there — including Substack Notes, which is where I came across this post.

I’ve several things to say about this hand-drawn map of the ‘social media archipelago’. First, as the top commenter on the post notes, it’s similar to a classic xkcd cartoon from 2007 and shows how much the landscape has changed.

Second, Chelsea Troy quite rightly points out that we’ve got a Twitter-shaped hole in the internet, which people are filling with either private communities (Slack/Discord), the Fediverse (Mastodon, etc.), or Twitter-like things (Bluesky, etc.)

What I think they’re missing is… Substack Notes. For someone who loves reading and writing, it’s full of interesting people sharing thoughtful things. You can find my notes here.

To anyone looking to navigate the ongoing perils of social media, it can be a challenging and daunting task. An adventure marked by intense trepidation and foreboding, by fear and doubt. But worry no longer, I have drawn a map.

I present to you, The Social Media Archipelago.

Whether you’re lost among the Musky Mountains or the Dunk Swamps of Twitterland, or in the selfie-obsessed Forest of Mirrors on the Isle of Insta, I hope this chart can be a helpful guide on your journey. Never again be stranded among the bleak deserts of Facebook, no interesting content in sight. Never again be sucked into the maelstrom of the Doomscroll, forever locked among the whirlpools of cheap dopamine hits.

Instead, look toward the lone peak of innocent hopes, reminiscent of the heady days of the early internet, where healthy conversation and good faith debate may yet flourish. Look to the terra novalis, known to the early cartographers as the mythical land of Substackus Notum.

Or in the common tongue — Substack Notes.

(it was a slow day at work ok)

Source: Note by M. E. Rothwell on Substack

This isn't working. Can we talk about that?

Thankfully, there’s no-one calling me back into the office. But this post is about people who are being recalled — as well as those working in sub-optimal remote settings.

This post suggests that the phrase “this isn’t working” should be viewed as an invitation for dialogue rather than a threat, arguing that open communication is crucial for fixing what’s inadequate in the workplace.

The Future of Work conversation is full of rejected gifts. We've seen bosses throw "this isn't working" back in employees' faces as "entitlement." As "millennials." As "no one wants to work anymore." We've seen employees throw it back in their CEO's face, too, as "outdated." Or "boomers." Or "something something commercial real estate." As far as we can tell, that point-scoring hasn't gotten us any closer to the future we're all trying to build.

We know that everyone’s sick of constantly redesigning the rules of work. There’s this revisionist nostalgia for, in some quarters, 2019. And in others, 2021. We know that some of you have built systems here in 2023 that are working for you, and you would like them to please just stay put for a goddamned minute. We get that. But when someone tells you that those systems aren’t working for them, shouting them down won’t give you the peace and quiet you want.

[…]

By all means, read what other orgs are doing. Maybe there are things you can learn from what Apple does, or Google, or Smuckers. But there’s no shortcut around the conversation. Every sales person, fundraiser, marketer, product leader, and designer will tell you the same thing. You have to talk to people to know if you’re actually reaching them. To know if any of your solutions actually solve the problem.

Source: Sorry I’m getting kicked out of this room | Raw Signal Group

Constructs, meta-constructs, and shared cognitive spaces

Posts like this one by Venkatesh Rao are like catnip to me. He explores the concept of the ‘real world’ as a construct shaped by collective human beliefs and values, arguing that it is, of course, anthropocentric and inherently absurd.

All worlds created by humans, such as fandoms and nationalisms, have more or less consequence in shaping the ‘real world’. In other words, there are constructs which serve to help shape the meta-construct. This, in essence, is a shared cognitive space for humans.

Well worth reading if you’re in the mood to question the nature of reality, explore the power of collective belief, and ponder the transient nature of what we consider to be ‘real’. It’s a complex examination of how human perception shapes the world we live in, and how that world, in turn, shapes us.

Matrix-style vertical green text
Accounting for consciously shared worlds like religions, fandoms, and nationalisms, as well as commonalities that arise from obvious and lazy lines of thought or imitation, there are perhaps a few thousand to tens of thousands of non-trivial distinct inhabited worlds out there. Of these, perhaps a few hundred are significant enough to require accounting for in any analysis. The rest are, at best, butterflies flapping in the chaotic weather-systems of history, hoping to cause hurricanes.

Of the few hundred that are significant, perhaps a couple of dozen matter strongly, and perhaps a dozen matter visibly, the other dozen being comprised of various sorts of black or gray swans lurking in the margins of globally recognized consequentiality.

This then, is the “real” world — the dozen or so worlds that visibly matter in shaping the context of all our lives, with common knowledge of such shaping constituting a non-trivial part of the visibly mattering. The consequentiality of the real world is partly a self-fulfilling prophecy of its own reality. Something that can play the role of truth. For a while.

[…]

The real world, in other words, is a fragile, unreliable, dubious, borderline incoherent, unsatisfying house of cards destined to die. Yet, while it lives and reigns, it is an all-consuming, all-dominating thing. A thing that can seem extraordinarily real compared to any more fragile, value-based private delusions we may harbor. To the point that we typically refer to it unironically as the real world, to be contrasted with self-indulgent fantasies, and characterize belief in it as pragmatism rather than just a grittier delusion.

Source: This is the New Real World | Venkatesh Rao

Image: Markus Spiske

Research shows people in most countries are anti-capitalist

I came across this via fellow Sunderland AFC supporter Andrew Curry’s Just Two Things newsletter. We also share similar political views, so I share his delight that this journal article from a right-wing thinktank does the opposite of what they were evidently setting out to achieve.

The article presents findings from a global survey on attitudes towards capitalism, revealing that pro-capitalist views are rare and mostly found in six countries. Bizarrely, and seemingly clutching at straws, the research found anti-capitalist views often correlate with conspiracy thinking and negative attitudes towards the rich.

You’re not paranoid if they’re out to get you, and it’s not a conspiracy if capitalism really does favour the 0.1%.

Chart showing capitalist sentiment
In only seven of 34 countries – Poland, the United States, the Czech Republic, Japan, Argentina, South Korea, and Sweden – does a positive attitude towards economic freedom clearly prevail. Including the word ‘capitalism’ reduces this to just six of 34 countries, namely Poland, the United States, the Czech Republic, Japan, Nigeria and South Korea. In most countries, anti-capitalist sentiment dominates.

What is it exactly that bothers people about capitalism? If you look at the survey’s overall conclusions, it is – in this order – primarily the opinion that:

  • capitalism is dominated by the rich, who set the political agenda;
  • capitalism leads to growing inequality;
  • capitalism promotes selfishness and greed; and
  • capitalism leads to monopolies.

Not surprisingly, anti-capitalism is most pronounced among those on the left of the political spectrum and the strongest pro-capitalists are to be found to the right of centre. But while in some countries the formula is ‘the more right-wing, the more supportive of capitalism’, there are more countries in which moderate right-wingers are somewhat more supportive of capitalism than those on the far right of the political spectrum.

Source: Attitudes towards capitalism in 34 countries on five continents | Economic Affairs

What's good for us is also good for the planet

I came across this via Dense Discovery, which is one of a number of additional newsletters to which I would recommend Thought Shrapnel readers subscribe.

In this article, Erin Remblance shows how modern lifestyles, particularly in wealthy nations, have led to a loss of human connection and an increase in mental health issues. She suggests that the shift from community-oriented activities to individualistic, consumer-driven behaviour has not only harmed our well-being but also contributed to the climate emergency.

The solution? Returning to simpler, more sustainable ways of living that focus on human connection and creativity. By becoming creators rather than mere consumers we can improve our mental health and simultaneously benefit the planet.

One of the top 5 regrets of the dying is that they wished that they hadn’t worked so hard. Another is that they wished they’d been brave enough to pursue the life they’d really dreamed of, without worrying about what others thought; that they’d had the courage to do the things that made them truly happy. Which is ironic, really, because according to the 18th century economist and philosopher, Adam Smith, wealth is something that is “desired, not for the material satisfaction that it brings, but because it is desired by others”. People are getting to the end of their lives regretting that they worked so hard – often to accumulate wealth so that others could envy it – wishing that instead they had pursued things that truly made them happy regardless of what people thought. What a lesson we could learn from these people’s dying realisations.

[…]

Reducing our consumption is of course important for the health of the planet, but what if one way to do this is by becoming producers, or creators, ourselves? Rediscovering what our human-energy – an abundantly available energy we seem to be using increasingly less of – can achieve, something we once innately drew upon, now buried deep within us as fossil-fuelled energy has overtaken our lives. There’s a clear link here to actions that will mitigate climate change: walking, cycling, growing our own food, and other low-tech solutions such as repairing and fostering community that encourages “social connections … rather than fostering the hyper-individualism encouraged by resource-hungry digital devices.”

[…]

We are not supposed to live like this, and it shows. We can see it in the deterioration of mental and physical health of people in so called ‘wealthy’ nations, in the exploitation of people in the Global South, and we can see it in the planetary-wide ecological crisis we face. What if, in trying to heal ourselves, we also begin to heal the planet? Because, in a wonderful turn of events, it would seem that what is good for us, is good for the planet too.

Source: We are not supposed to live like this | Erin Remblance

The Empty Boat

This was cited in something I read last week and I thought it was worth making it easy for me to re-find. There are plenty of philosophers, including Aristotle, who talk about the difference between the way we treat animate and inanimate objects, but none put it so eloquently as Chuang Tzu.

If a man is crossing a river

And an empty boat collides with his own skiff,

Even though he be a bad-tempered man

He will not become very angry.

But if he sees a man in the boat,

He will shout at him to steer clear.

If the shout is not heard, he will shout again,

And yet again, and begin cursing.

And all because there is somebody in the boat.

Yet if the boat were empty,

He would not be shouting, and not angry.

Source: The Empty Boat by Chuang Tzu | Daily Zen

Maybe it makes sense to talk to plants after all

Although I’ve alluded to talking to plants in the title for this post, the interesting thing here is that research shows they can sense vibrations made by insects like caterpillars. This allows the plants to prepare for an attack by producing defensive chemicals.

Some research suggests that plants can even detect the sound of water, which could have implications for sewer systems. The findings open up possibilities for using sound-based interventions in agriculture, such as drones equipped with speakers to warn crops of impending pest attacks.

Plants have been evolving alongside the insects that pollinate them and eat them for hundreds of millions of years. With that in mind, Heidi Appel, a botanist now at the University of Houston, and Reginald Cocroft, an entomologist at the University of Missouri, wondered if plants might be sensitive to the sounds made by the animals with which they most often interact. The researchers recorded the vibrations made by certain species of caterpillar as they chewed on leaves. These vibrations are not powerful enough to produce sound waves in the air. But they are able to travel across leaves and branches, and even to neighbouring plants if their foliage touches.

The researchers then exposed Thale cress—the plant biologist’s version of the laboratory mouse—to the recorded vibrations while no caterpillars were actually present. Later, they put real caterpillars on the plants to see if exposure had led them to prepare for an insect attack. The results were striking. Leaves that had been exposed had significantly higher levels of defensive chemicals like glucosinolates and anthocyanins, making them much harder for the caterpillars to eat. Leaves on control plants that had not been exposed to vibrations showed no such response. Other sorts of vibration—caused by the wind, for instance, or other insects that do not eat leaves—had no effect.

[…]

The research may have practical consequences, too. “Drones armed with speakers and the right audio files could warn crops to act when pests are detected but not yet widespread,” says Dr Cocroft. Unlike chemical pesticides, sound waves leave no toxic residue. With the help of weather forecasts, the system could even be used to prepare crops for cold snaps.

[…]

Farmers monitor the health of their crops by eye. (Mosaic virus, for instance, is so named because of the mottled pattern produced on the leaves of suffering plants.) That can be hard to do properly over an entire field. But if plants are broadcasting auditory indicators of distress, then wiring a field with microphones might help farmers keep an ear out for trouble.

Source: Plants don’t have ears. But they can still detect sound | The Economist

Noise and working from home

I’ve worked from home for the last eleven years. For the last nine years, I’ve lived near the middle of a market town in the north east of England. You wouldn’t believe the amount of noise.

As respondents in this Hacker News thread comment, you kind of get used to it, and also work around particularly loud noise. However, the struggle is real and now that my wife and I both work from home it’s a factor in us moving.

I’d second the opinion of the commenter I’ve quoted below about getting headphones with at least two levels of noise cancellation. I bought some Sony WH-CH720N cans when they started building behind my home office and they’ve been a godsend.

Noise cancelling headphones hanging on a hook
My personal experience as a big-city dweller who has also worked for prolonged stints in suburban and very rural places is that the suburban version of this problem can really be the worst of both worlds.

When I’m visiting my parents in the suburbs… it’s generally quiet, but that one leaf blower 4 doors down or the one garbage truck crawling down the block suddenly becomes the only thing I can focus on. The noises are infrequent and jarring when they occur.

When I’m at home in my city apartment, the background noise is truly constant - it forms a canvas, nothing really jumps out and therefore the level of what it takes to make a distraction is a lot higher.

My practical advice is to explore headphones with passive noise isolation instead of active noise cancelling. The passive isolation is pretty foolproof, even with sudden or extreme changes in background noise content that the active noise cancelling sometimes takes a moment to adjust to (or perhaps try something like working in a coffee shop for an hour to get the other extreme and reset: write emails where distraction is more OK, come home to the relative quiet of the home office for focus time. I’ve also found even a change of scenery can get me into the zone regardless of what is going on environmentally)

Source: Ask HN: How do you deal with never ending noise and distraction WFH? | Hacker News

Image: Pexels

Shrinkflation, sizes, and shaming

I’d be surprised if ‘shrinkflation’ isn’t word of the year for 2023. For those unaware, it’s the reason why prices for some products have stayed the same while their sizes have decreased.

This article is about Carrefour, one of Team Belshaw’s favourite overseas supermarkets. They’ve added warnings on shelves to “shame” brands. The thing is, as this thread from Mario Zechner shows, it’s not as if supermarkets aren’t in the price fixing game. Also, as I wrote about recently, stores are essentially panopticons.

While I’m on the subject, you might be interested in this crowdsourced website which tracks the differences in size of packs of everything from toothpaste to shortbread biscuits.

The French supermarket chain Carrefour has put labels on its shelves this week warning shoppers of “shrinkflation”, the phenomenon where manufacturers reduce pack sizes rather than increase prices.

It has slapped price warnings on products from Lindt chocolates to Lipton iced tea to pressure top consumer goods suppliers Nestlé, PepsiCo and Unilever to tackle the issue in advance of much-anticipated contract talks.

[…]

Carrefour has marked 26 products in its stores in France with the labels, which say: “This product has seen its volume or weight fall and the effective price from the supplier rise.”

For example, Carrefour said a bottle of sugar-free peach-flavoured Lipton iced tea, produced by PepsiCo, shrank to 1.25 litres (0.33 gallon) from 1.5 litres, resulting in a 40% effective increase in the price per litre.

Source: Carrefour puts ‘shrinkflation’ price warnings on food to shame brands | The Guardian

Dark Tech and Project Cybersyn

I read Evgeny Morozov’s book To Save Everything, Click Here a few years ago and found it frustrating. It’s about the “folly of technological solutionism” so, while I agreed with the broad argument, I thought he presented it in an annoying way.

Here, Morozov is interviewed about his podcast The Santiago Boys, which explores Project Cybersyn, an ambitious project that ran from 1971 to 1973 under Salvador Allende’s Chilean government. The project aimed to use cybernetics to efficiently manage state-owned enterprises but faced various internal and external challenges, including U.S. interference and internal political tensions.

What’s useful in this interview is the discussion of “dark tech,” highlighting the technological vulnerabilities and challenges faced by socialist projects like this. Morozov argues that the legacy of Project Cybersyn offers valuable lessons for contemporary discussions on socialism, technology, and governance. He emphasises the need for technological sovereignty and a nuanced approach to management and planning. So yes, we could learn a thing or two.

Nick Serpe: What was Project Cybersyn?

Evgeny Morozov: Project Cybersyn—short for “cybernetic synergy”—aimed to aid the Chilean state in managing the enterprises being nationalized by the Unidad Popular government. A significant hurdle was the lack of sufficient managerial staff to oversee them. Allende’s opponents, including the U.S. ambassador, were making things even harder by encouraging managers and other professionals to flee the country.

As with most science and technology projects, the path toward Cybersyn was not linear. It didn’t emerge as a culmination of some strategic plan to use computers in management; the whole process was more chaotic—and even its name came at a later stage. It all started with an effort to bring some external expertise to Chile. Fernando Flores, a high-ranking member of the Allende administration, felt that he needed help in dealing with all these nationalized companies. So he sought the guidance of the British management consultant Stafford Beer, even going to London to meet him. That encounter resulted in Beer agreeing to go to Chile. This collaboration eventually blossomed into what we now recognize as Project Cybersyn.

[…]

In the end, Cybersyn was a tragedy—and a drama. This project started in an optimistic, even utopian political environment. The Santiago Boys worked off the assumption that Allende would be allowed to govern, and they would be able to build a different economy in Chile. These assumptions were quite unrealistic. If you know anything about how ITT, the CIA, local industrialists, the government of Brazil, and other forces were trying to prevent Allende from even coming to office, you would never think that such optimism was warranted—especially when Allende won the election with only one-third of the popular vote and relied on a very unstable coalition of six parties.

Source: Liberty Machines and Dark Tech | Dissent Magazine

Good news on Covid treatments

Well, this is promising. Researchers have identified a critical weakness in COVID-19: its reliance on specific human proteins for replication. The virus’s “N protein” needs human cells to properly package its genome and propagate. Apparently, blocking this interaction could prevent the virus from infecting human cells.

Right now, the most effective treatment for COVID is Paxlovid, which is only effective within three days of infection. This new discovery could lead to medications useful at all stages of infection and potentially pave the way for a new class of antivirals useful against other viruses like flu, RSV, and Ebola.

COVID takes advantage of a human post-translation process called SUMOylation, which directs the virus’ N protein to the right location for packaging its genome after infecting human cells. Once in the right place, the protein can begin putting copies of its genes into new infectious virus particles, invading more of our cells, and making us sicker.

“In the wrong location, the virus cannot infect us,” said Quanqing Zhang, co-author of the new study and manager of the proteomics core laboratory at UCR’s Institute for Integrative Genome Biology.

[…]

This paper shows that COVID depends on SUMOylation proteins, just as the flu does. Blocking access to the human proteins would allow our immune systems to kill the virus.

Currently the most effective treatment for COVID is Paxlovid, which inhibits virus replication. But patients need to take it within three days following infection. “If you take it after that it won’t be so effective,” Liao said. “A new medication based on this discovery would be useful to patients at all stages of infection.”

Source: Scientists uncover COVID’s weakness | UC Riverside News

Navigating the landscape of Digital and Media Literacy 

The report from Tactical Tech focuses on Digital Media Literacy (DML), exploring its complexities and the challenges associated with how it’s assessed. It delves into the role of teachers and educators, hopefully once and for all dismissing the notion of “digital natives” and “digital non-natives”. The report emphasises that both teachers and students bring unique perspectives to digital learning environments: teachers may offer critical thinking skills, while students may be more comfortable with digital tools.

This commissioned research report is a continuation of the Media Literacy Case for Educators project, which was introduced in April 2023 in the article: Media Literacy Case for Educators: Empowering Educators to Lead Media Literacy Initiatives in Europe and referenced in: An Assessment of the Needs of Educators and Youth in Europe for a Digital and Media Literacy Education Intervention. Two phases of exploration have been conducted, involving two rounds of desk research. The result of the second phase can be seen in the annotated bibliography included at the end of this report. Recommendations which resulted from the research include: combine methods and make learning fun; use evaluation and other key elements in the curriculum. Additional observations and considerations which require further exploration include: give attention to teachers’ and educators’ skills; and develop “patchwork blankets” and alliances.

Source: Digital and Media Literacy Education: Navigating an Ever-Evolving Landscape | Tactical Tech

Ducks, prompting, and LLMs

Large Language Models (LLMs) like ChatGPT are designed to withhold certain information: think how to make a bomb, or how to kill people and dispose of the body. Generally, stuff that we don’t want at people’s fingertips.

Some things, though, might be prohibited because of commercial reasons rather than moral ones. So it’s important that we know how to theoretically get around such prohibitions.

This website uses the slightly comical example of asking an LLM how to take ducks home from the park. Interestingly, the ‘Hindi ranger step-by-step approach’ yielded the best results. That is to say that prompting it in a different language led to different results than in English.

Language models, whatever. Maybe they can write code or summarize text or regurgitate copyrighted stuff. But… can you take ducks home from the park? If you ask models how to do that, they often refuse to tell you. So I asked six different models in 16 different ways.
Source: Can I take ducks home from the park?

The supermarket is a panopticon

My son’s now old enough to get ‘loyalty cards’ for supermarkets, coffee shops, and places to eat. He thinks this is great: free drinks! money off vouchers! What’s not to like? On a recent car journey, I explained why the only loyalty card I use is the one for the Co-op, and introduced him to the murky world of data brokers.

In this article, Ian Bogost writes in The Atlantic about the extensive data collection by retailers to personalise marketing. This not only predicts but also influences consumer behaviour, raising ethical concerns about the erosion of privacy and democratic ideals. Bogost argues that this data-driven approach shifts the power balance, allowing companies to manipulate consumer preferences.

In marketing, segmentation refers to the process of dividing customers into different groups, in order to make appeals to them based on shared characteristics. Though always somewhat artificial, segments used to correspond with real categories or identities—soccer moms, say, or gamers. Over decades, these segments have become ever smaller and more precise, and now retailers have enough data to create a segment just for you. And not even just for you, but for you right now: They customize marketing messages to unique individuals at distinct moments in time.

You might be thinking, Who cares? If stores can offer the best deals on the most relevant products to me, then let them do it. But you don’t even know which products are relevant anymore. Customizing offerings and prices to ever-smaller segments of customers works; it causes people to alter their shopping behavior to the benefit of the stores and their data-greedy machines. It gives retailers the ability, in other words, to use your private information to separate you from your money. The reason to worry about the erosion of retail privacy isn’t only because stores might discover or reveal your secrets based on the data they collect about you. It’s that they can use that data to influence purchasing so effectively that they’re rewiring your desires.

[…]

Ordinary people may not realize just how much offline information is collected and aggregated by the shopping industry rather than the tech industry. In fact, the two work together to erode our privacy effectively, discreetly, and thoroughly. Data gleaned from brick-and-mortar retailers get combined with data gleaned from online retailers to build ever-more detailed consumer profiles, with the intention of selling more things, online and in person—and to sell ads to sell those things, a process in which those data meet up with all the other information big Tech companies such as Google and Facebook have on you.

“Retailing,” Joe Turow told me, “is the place where a lot of tech gets used and monetized.” The tech industry is largely the ad-tech industry. That makes a lot of data retail data. “There are a lot of companies doing horrendous things with your data, and people use them all the time, because they’re not on the public radar.” The supermarket, in other words, is a panopticon just the same as the social network.

Source: You Should Worry About the Data Retailers Collect About You | The Atlantic

Microcast #097 — What do we mean by 'consensus'?


Exploring different conceptions of 'consensus' using polls on the Fediverse and LinkedIn, as well as reflecting on my own experience.

Show notes


Image: Unsplash

Piracy and the art of cultural archiving

Shortly before Daft Punk’s album Discovery was released, I managed to download a version of it which must have been exfiltrated from the studio. It was subtly different to the version that was released and, to be honest, I preferred it. Sadly, I’ve long since lost the MP3s, and the chance of me finding anything other than the official version these days is minimal.

This article is about the preservation of music, movies, and books. What copyright maximalists don’t realise is that piracy is actually amazing at ensuring that cultural diversity flourishes and is preserved. It’s definitely worth a read.

(It’s also interesting to me how this intersects what I posted earlier about AI-generated music and fandom, because both intersect with ‘official’ narratives and our current understanding of copyright.)

"Your local bookseller cannot creep into your home in the middle of the night and reclaim the contents of your bookshelf," the legal scholars Aaron Perzanowski and Jason Schultz observe in their 2016 book The End of Ownership. "But Amazon exercises a very different kind of practical power over your digital library. Your Kindle runs software written by Amazon, and it features a persistent network connection. That means Amazon can send it instructions—to delete a book or even replace it with a new version—without any intervention from you." The potential for mischief was clear as early as 2009, when someone started selling bootleg Kindle editions of George Orwell's 1984 and Amazon reacted by dispatching even some purchased copies to the memory hole.

The fearful mood intensifies whenever politics enters the picture. When books by Agatha Christie, Roald Dahl, and other long-dead authors were reedited to reflect what are said to be “contemporary sensitivities,” many e-books were automatically updated even for readers who had bought them long before. During the George Floyd protests of 2020, several streaming services, unable to stop the abusive policing that set off the unrest, decided instead to edit or eliminate TV episodes where characters appeared in blackface. (This wasn’t an anti-racist gesture so much as a cargo-cult copy of an anti-racist gesture—an elaborate imitation built without figuring out the functions of the component parts—and so it mostly affected shows that had presented blackface with obvious disapproval.) Several songs with words that might offend listeners have gone missing from Spotify or (as with Lizzo’s “Grrrls,” which originally included the term spaz) were replaced with new versions.

Every time news breaks of one of these deletions, a refrain echoes online: Buy physical media! The internet is too impermanent, the argument goes: The real cultural cornucopia was in the outside world.

As is often the case with nostalgia, this leaves out a lot. We still have access to far more media than we did in the days before the mass internet. Yes, this includes that politically controversial material: It takes less than a minute to dig up the unredacted version of “Grrrls” on YouTube (just search for lizzo grrrls spaz), and it’s not hard to find material that was withdrawn from circulation long before the internet era. (I’m told the ’90s were a less politically correct time than today, but back then you needed to track down a bootleg DVD or videotape if you were curious about Song of the South. Now it’s posted on the Internet Archive.) It’s too easy to take the internet’s riches for granted and to forget how much was inaccessible just a few decades ago.

But while we shouldn’t want to return to those pre-web days, there’s something to be said for that online-offline hybrid space where my old tape-trading network dwelled—if not as a world to recreate, then as a way to think about cultural preservation. And there’s something to be said for the bootleggers and pirates. Whether or not they mean to do it, they’re salvaging pieces of our heritage.

Source: Online Outlaws Preserve the History of Music, Movies, and Books | Reason

Greatest films of all time?

I confess to only having watched one of the top 10 films on this list, which is put together mainly by critics and people who work in the film industry. Must rectify that.

On the other hand, I have watched seven of the top 10 films on the IMDB top 250 list…

In 1952, the Sight and Sound team had the novel idea of asking critics to name the greatest films of all time. The tradition became decennial, increasing in size and prestige as the decades passed.

The Sight and Sound poll is now a major bellwether of critical opinion on cinema and this year’s edition (its eighth) is the largest ever, with 1,639 participating critics, programmers, curators, archivists and academics each submitting their top ten ballot. What has risen up the ranks? What has fallen? Has 2012’s winner Vertigo held on to its title? Find out below.

Source: The Greatest Films of All Time | BFI

Fandom and AI generated music

If you haven’t discovered AI-generated songs by your favourite artists, then you’re missing a trick. Try I’m a Barbie Girl by Johnny Cash, for example, or Skyfall by Freddie Mercury. Amazing stuff.

This article is about the fandom around artists such as One Direction and Harry Styles, who are paying hard-earned real money for ‘leaks’ which may or may not be AI-generated music. No-one can tell the difference.

Discord communities within the Harry Styles and One Direction fandom are tearing themselves apart over “leaked snippets” of supposed demo songs that may or may not be AI-generated and are being sold to superfans for hundreds of dollars each.

The controversy has turned into a days-long crowdsourced investigation and communitywide obsession, in which no one is really sure what’s real, what’s fake, whether they’re being scammed, or who or what made the songs that they’re listening to.

Over the last few weeks, a flurry of Harry Styles and One Direction snippets—which are short samples of a track designed to prove legitimacy so people will pay for the full thing—have begun popping up on YouTube, TikTok, and, most importantly, Discord, where they are being sold. The problem is no one can tell which, if any, of the songs are real, including AI-analysis companies who listened to the tracks for 404 Media.

Source: The Specter of AI-Generated ‘Leaked Songs’ Is Tearing the Harry Styles Fandom Apart | 404 Media

Saving the world using a 2x2 matrix

I’m a fan of Venkatesh Rao’s writing, and in this post he explores what we mean by ‘saving’ when we talk about ‘saving the world’. To do this, he uses a 2x2 matrix, categorising people’s motivations along two axes: biological scope and temporal scope. He identifies four types of “worlds” people aim to save: Civilisations, represented by ethnocentrists; Technological Modernity, represented by cosmopolitans; Modern Nations, represented by patriots; and the World as Wildernesses, represented by Gaians.

Rao himself identifies as a “cosmopolitan with Gaian tendencies,” advocating for a world that is rich in both contemporary technological potentialities and natural history. He argues that the focus should not be on saving the world but on “rewilding” it so that it becomes self-sustaining and doesn’t require saving.

Worlds constructed with biologically narrow scopes (which I’d define as somewhere between family/kinship groups to ethnicities and races, with perhaps a few animal species of cultural significance included, but always falling short of including all of humanity, let alone all of the biosphere) have all sorts of analytical problems that makes them intellectually fragile. But my main problem with them is that they are boringly impoverished to the point of deadness. Even if I could, with careful construction, make them “work” as worlds-to-save, and imagine sustainable futures where they are the entirety of the world, I don’t see the point.

But apparently, a significant portion of humanity disagrees with me on this front. Many are attracted to the idea that their world-to-save can expand to become all that is; replacing a messy, illegible pluralism with a gloriously insipid and legible monoculture that reigns supreme with a firm, dead hand. I suspect the very intellectual fragility of these worlds is part of their appeal, much as fragility is part of the appeal of house-of-cards games.

[…]

For a cosmopolitan with Gaian tendencies, to save the modern world is to rewild and grow the global web of already slightly wild technological capabilities. Along with all the knowledge and resources — globally distributed in ways that cannot be cleanly factored across nations, civilizations, and other collective narcissisms — that is required to drive that web sustainably. And in the process, perhaps letting notions of civilization — including wishful notions of regulating and governing technology in “human centric” ways — fall by the wayside if they lack the vitality and imagination to accommodate technological modernity.

Source: What we seek to save when we seek to save the world | Ribbonfarm

The complexities of distraction

I really enjoyed this essay by David Schurman Wallace in The Paris Review about being distracted while writing. It reminded me of a much shorter version of one of my favourite books about writing: Out of Sheer Rage by Geoff Dyer.

Wallace delves into the complexities of distraction, using Gustave Flaubert’s unfinished novel Bouvard and Pécuchet as a lens to explore how our pursuits, whether intellectual or mundane, often become a chain of distractions. He argues that distraction isn’t necessarily a negative state but could be an essential part of the human condition, a byproduct of our ceaseless quest for knowledge and meaning.

I began writing this essay while putting off writing another one. My apartment is full of books I haven’t read, and others I read so long ago that I barely remember what’s in them. When I’m writing something, I’m often tempted to pick one up that has nothing to do with my subject. I’ve always wanted to read this, I think, idly flipping through, my eyes fixing on a stray phrase or two. Maybe it will give me a new idea.

In this moment of mild delusion, I’m distracted. I’ve always wanted to write an essay about distraction, I think. Add it to the laundry list of incomplete ideas I continue to nurse because some part of me suspects they will never come to fruition, and so will never have to be endured by readers. These are things you can keep in the drawer of your mind, glittering with unrealized potential. In the top row of my bedroom bookshelf is a copy of Flaubert’s final novel, Bouvard and Pécuchet. Something about it seems appropriate, though I’m not sure exactly what. I pluck it down.

Source: In This Essay I Will: On Distraction | The Paris Review

Developing your niche

The website of the guy behind this post is a bit too heavy on the self-marketing for my liking, but I did like the diagram in this post about developing rather than ‘finding’ your niche.

The diagram is contrasted with the kind of Ikigai approach you usually see which, he points out, doesn’t tell you where to actually start.

First, you need to take a courageous leap.

You need to ignore the negative voices of self-doubt, and you need to ignore feelings of “imposter syndrome.”

Next, you need to begin exploring the odd thing(s) you find fascinating. I call this phase, the “Zone of Fascination.”

Next, you must find a congregation of people who share your irrational fascination. For myself, I found this in r/Zettelkasten.

After this, you need to exit the congregation and go deeper than anyone else in a specific area. You must undergo three challenges. Think of these as “quests” in a hero’s journey.

Source: It’s Not about Finding Your Niche, It’s about Developing Your Niche | Scott P. Scheper

Monday morning feeling

This is definitely a mood.

Cat cartoon with words: "The stupidity of the masses is only equalled by their tolerance for grotesque banality"
Just a cat pondering the meaning of life.
Source: Who the Hell Are You? | The New Yorker

The burnout curve

I stumbled across this on LinkedIn. There doesn't seem to be an authoritative source yet other than the social media posts of its author, Nick Petrie, which is a shame. So I'm quoting most of it here so I can find and refer to it in future.

In terms of my own experience, I slid down that slope pretty quickly in my teaching career, and definitely experienced the 'trap' of going back into a similar situation in a different school. It was also toxic as I had been promoted quickly and, looking back, probably beyond my abilities and experience at the time.

But the great thing about this graphic is that it shows that it's possible to dig your way out, as I did, by realising that a different path is possible. It hasn't always been plain sailing, and there have been other, lesser, traumas since. But I've definitely grown from my earlier experiences, and this is a handy chart to show people who are near the bottom of the curve.

When we interviewed people who had burned out, they told us remarkably similar stories. 

1. A relentless work ethic – they had a set of beliefs and stories that drove them to work hard - I will deliver, I won’t let people down, I must give 100% at all times.

2. Bottomless workload – they joined organizations that rewarded their work ethic with endless work. The harder they worked the more they were given.

3. Sliding into burnout – thoughts of work became constant. They had trouble switching off in the evenings, work was taking over their life.

4. Ignoring the warning signs – their body was sending signals that something needed to change. They were tense, irritated, exhausted. But they couldn’t slow down – there was too much to do.

5. The breakdown – For those who would not listen, the body and brain had a last resort. They shut down. People couldn’t get out of bed, couldn’t drive, couldn’t read. The body refused to go on. 

The trap for many people is the belief that rest is the solution. So, they took a break – a week, a month or a year. They then went back to the same work, with the same mindset and the same behaviors. They got the same result. 

The people we interviewed who genuinely overcame burnout followed a common path. 

6. Meeting friends and mentors – they realized they couldn’t repeat their past. They needed new perspectives and a new approach. They got these from family, peers, coaches, therapists and support groups.

7. Deep reflection – they came off autopilot for the first time in years. They reflected deeply on the past – what caused me to burnout? What was driving me? Then the future – what sort of work and life do I want going forward? How can I move myself towards this vision?

8. Taking action – they took new actions, sometimes big – change of job, change of career - sometimes small - they set new boundaries, restarted a hobby, got a therapist. Some things helped, some things did not. It didn’t matter. The key thing was they were doing NEW things. They were not repeating their old habits. New actions led to new insights and habits.

9. Post traumatic growth – when we interviewed people who took this path 2 years after their burnout, the most surprising thing was how much they had grown from the experience.  

Source: Nick Petrie | LinkedIn

Status detection systems

I listened to a fascinating episode of the You Are Not So Smart podcast while out running over the weekend. The focus was on a new book by Will Storr called The Status Game and he was full of insights.

Some of the most important takeaways for me were:

  • There are three main status games: dominance, virtue, and success.
  • Status games are hard-wired into us, and we're essentially just 'status detection systems'.
  • Social media sites such as Twitter make it easy to signal virtue, whereas those such as Instagram make it easy to signal success.
  • Trying to force people to play your type of status game is about dominance.
  • Status games are why teenagers, who are new to these games, get embarrassed easily and take lots of risks.
  • The types of status games available to you often depend on your socio-economic status. This explains honour killings, quests for dominance, etc.
YANSS logo
In this episode we welcome back author Will Storr whose new book, The Status Game, feels like required reading for anyone confused, curious, or worried about how politics, cults, conspiracy theories, communities, social media, religious fundamentalism, polarization, and extremism are affecting us – everywhere, on and offline, across cultures, and across the world.

What is The Status Game? It’s our primate propensity to perpetually pursue points that will provide a higher level of regard among the people who can (if we provoked such a response) take those points away. And deeper still, it’s the propensity to, once we find a group of people who regularly give us those points, care about what they think more than just about anything else.

In the interview, we discuss our inescapable obsession with reputation and why we are deeply motivated to avoid losing this game through the fear of shame, ostracism, embarrassment, and humiliation while also deeply motivated to win this game by earning what will provide pride, fame, adoration, respect, and status.

Source: The game we can’t escape, the psychology behind our perpetual drive to pursue status | You Are Not So Smart

The punishment for being authentic is becoming someone else’s content

This short piece by Drew Austin reminds me of a couple of links I posted yesterday about Non-places and TikTok’s effect on migration. There are so many quotable parts, including that when it comes to social media, “the only place left to go is outside”.

What I think is interesting is how online and offline used to be seen as completely separate. Then we realised the impact that offline life had on online life, and now we’re seeing the reverse: Instagram, TikTok, etc. having a huge impact on the spaces in which we exist offline.

“In the next few years,” Kyle Chayka tweeted yesterday, “the last desperate search for shreds of authentic local culture will convulse the globe as the internet consumes every interesting quirk and scales it up to the size of TikTok.” That all-too-plausible prediction fits well alongside Chayka’s concept of AirSpace and his observations about overtourism, each examining how social media has come to shape the physical world (or at least vent its noxious exhaust there) instead of merely reflecting it. If AirSpace represents the homogenizing tendency of globally scaled algorithmic platforms like Instagram and Airbnb, which herd everything they touch into aesthetic alignment, then TikTok’s impact seems like the opposite: the cultivation and amplification of difference by a desperate horde of content creators scouring the ends of the earth for new material. The latter ultimately has the same entropic effect as the former, reframing local nuances as temporary viral microtrends that diffuse through culture, form the basis for a thinkpiece or two, and then recede back to their original modest scale. This may be ephemeral but it is pervasive and ongoing. In the contemporary landscape, the punishment for being authentic is becoming someone else’s content.

[…]

The illusion that the internet and “real life” are two separate universes has been thoroughly dispelled by now, but the nature of their interaction is complex and evolving. The social media era seems to have already peaked, as I predicted at the end of last year, calling our present moment a “saturation point of cultural self-consciousness that represents the fullest possible synthesis of reality and our digitally mediated perception of it.” The metaverse concept was dead on arrival; there’s nowhere left to go but outside. And that’s what we’re doing: TikTok is the social network for the internet’s decadent era, embodying the worldview that becoming viral content is the highest calling, the end state to which everything aspires and strives. You visit Italy not to enjoy yourself but to help Italy fulfill its destiny as a meme.

Source: I’m Beginning to See the Light | Kneeling Bus

Job crafting, identity, and fulfilment

This article by Lan Nguyen Chaplin, a professor of marketing at a prestigious business school, reflects my own experience. Those jobs I’ve thought were ‘big’ and ‘important’ have been the ones that have drained me of energy, made me sad, and generally changed me for the worse.

Instead, as Chaplin says, the important thing is to align your work with your values and personal strengths. This (eventually) allows you to transform what you do into a sustainable, balanced, and purposeful career. Sometimes, though, you have to know what you’re willing to tolerate and what you’re not, which can involve getting perilously close to the fire.

Outside of my fancy new title, I had begun to feel empty. In just a few months, my identity had quite literally become “my job” and I lost sight of the many things that fulfilled me outside of it. I didn’t have time or energy for family and friends. Activities that brought me joy, like running and lacrosse, went out the window. I traveled for work instead of pleasure. I had no time to give back to my community.

Instead, I jotted down research ideas on bar napkins, replied to emails when everyone else was offline, and had a growing portfolio of projects in development. I didn’t know how to disconnect without feeling unproductive. For hours, I sat with my laptop in isolation, working on research that might never be published.

[…]

The moment you have found your dream job is the moment you have stopped growing, evolving, and finding new ways to experience joy in your role. Remember, you were hired because you offer something the organization is missing. They need change. They need you to bring your whole self to work, and that means doing things differently with the added flair that is you. A job that inspires you and gives you the space you need to be your full self is the dreamiest job out there.

Source: What You Should Chase Instead of a Dream Job | Ascend

AI writing detectors don’t work

If you understand how LLMs such as ChatGPT work, then it’s pretty obvious that there’s no way “it” can “know” anything. This includes being able to spot LLM-generated text.

This article discusses OpenAI’s recent admission that AI writing detectors are ineffective, often yielding false positives and failing to reliably distinguish between human and AI-generated content. The company advises against the use of automated AI detection tools, advice that educational institutions will inevitably ignore.

In a section of the FAQ titled "Do AI detectors work?", OpenAI writes, "In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content."

[…]

OpenAI’s new FAQ also addresses another big misconception, which is that ChatGPT itself can know whether text is AI-written or not. OpenAI writes, “Additionally, ChatGPT has no ‘knowledge’ of what content could be AI-generated. It will sometimes make up responses to questions like ‘did you write this [essay]?’ or ‘could this have been written by AI?’ These responses are random and have no basis in fact.”

[…]

As the technology stands today, it’s safest to avoid automated AI detection tools completely. “As of now, AI writing is undetectable and likely to remain so,” frequent AI analyst and Wharton professor Ethan Mollick told Ars in July. “AI detectors have high false positive rates, and they should not be used as a result."

Source: OpenAI confirms that AI writing detectors don’t work | Ars Technica

Microcast #096 — Getting back in the saddle


Explaining what I've been up to and the difference between being a hedgehog and a fox.

Show notes


Image: Pexels

Non-places

I’m a big fan of Guy Debord’s work but have never read that of Marc Augé, who came up with the concept of ‘non-places’. These are spaces like airports and hotels where human interaction is largely transactional, leading to a form of psychic isolation.

This article outlines Augé’s argument that, to counteract the dehumanising effects of these non-places, individuals should engage in active observation, storytelling, and even talking to strangers. In this way we essentially become ‘supermodern flaneurs’, restoring a social element to these otherwise isolating spaces.

Augé was keen to explain how the transformations of the contemporary world had made anthropology more complicated. “We are in an era characterized by changes of scale,” he wrote. “Images of all sorts, relayed by satellites and caught by the aerials that bristle on the roofs of our remotest hamlets, can give us an instant, sometimes simultaneous vision of an event taking place on the other side of the planet.” So far, so postmodern; what Augé called the “acceleration” of events is an idea that crops up repeatedly in political theory. The specifically supermodern condition that he identifies is the dominance of non-places.

We live in “a world where people are born in the clinic and die in hospital, where transit points and temporary abodes are proliferating under luxurious or inhuman conditions” — from “hotel chains and squats, holiday clubs and refugee camps”. In non-places, exchange between human beings is transactional: you buy a sandwich, or a massage, or a train ticket. Speech is replaced by text; signs direct behaviour, instead of people, giving instructions and advertising products. We are, therefore, isolated, in “a solitude made all the more baffling by the fact that it echoes millions of others”.

Source: Non-places are robbing us of life | UnHerd

TikTok's algorithm and its effect on migration

Kukes, Albania is one of the poorest cities in Europe. Since the end of pandemic lockdowns, the city has seen a sharp rise in the number of Albanians, especially men, seeking asylum in the UK.

This article is a real eye-opener as to what’s going on, and talks about everything from TikTok propaganda to criminal gangs and cannabis houses. Well worth a read, especially to show the other side of the “small boats” rhetoric.

TikTok videos showing fast cars
In 2022, the number of people leaving Albania for the U.K. ticked up dramatically, as well as the number of those seeking asylum, at around 16,000, more than triple the previous year. According to the Migration Observatory at the University of Oxford, one reason for the uptick in claims may be that Albanians who lack proper immigration status are more likely to be identified, leading them to claim asylum in order to delay being deported. But Albanians claiming asylum are also often victims of blood feuds — long-standing disputes between communities, often resulting in cycles of revenge — and viciously exploitative trafficking networks that threaten them and their families if they return to Albania.

By 2022, Albanian criminal gangs in Britain were in control of the country’s illegal marijuana-growing trade, taking over from Vietnamese gangs who had previously dominated the market. The U.K.’s lockdown — with its quiet streets and newly empty businesses and buildings — likely created the perfect conditions for setting up new cannabis farms all over the country. During lockdown, these gangs expanded production and needed an ever-growing labor force to tend the plants — growing them under high-wattage lamps, watering them and treating them with chemicals and fertilizers. So they started recruiting.

Everyone in Kukes remembers it: The price of passage from Albania to the U.K. on a truck or small boat suddenly dropped when Covid-19 restrictions began to ease. Before the pandemic, smugglers typically charged 18,000 pounds (around $22,800) to take Albanians across the channel. But last year, posts started popping up on TikTok advertising knock-down prices to Britain starting at around 4,000 pounds (around $5,000).

People in Kukes told me that even if they weren’t interested in being smuggled abroad, TikTok’s algorithm would feed them smuggling content — so while they were watching other unrelated videos, suddenly an anonymous post advertising cheap passage to the U.K. would appear on their “For You” feed.

Source: The Albanian town that TikTok emptied | Coda Story

Walking 1,000 miles across Europe

As I know from personal experience, walking a long way by yourself is hard work, both mentally and physically. As this article points out, doing so as a woman is even harder, so good on Lea Page for not only walking a thousand miles across Europe, but also writing about how the biggest danger is… men.

When I walk alone, the consequences of every good or bad choice I make fall entirely on me: a responsibility and a freedom. As a woman and a mother, I rarely only have to consider what I want and need without having to first attend to other people. I know there are risks, but each time I come out of that “forest”, I feel stronger and more confident. Weighed against the simple daily rhythms of a long-distance walk and the joy and wonder I experience, risk – reasonable risk – becomes a small part of the equation, and one I am willing to accept.

[…]

One other time, while walking along a river just outside Colle di Val D’Elsa in Tuscany, I felt that familiar clench of panic. This river, a glacial robin’s-egg blue, meandered and tumbled gently. The gorge was not deep, but a lonely wooded path just outside a city struck me as the perfect place for an ambush. Clearly, men exist everywhere, so it made no sense to be frightened in that one particular place. I knew that, statistically, women are safer out in the world than they are at home, but in that moment, knowledge felt like thin protection. Unable to shake my feelings of dread, I called my husband, and we talked about inconsequential things so I could hear his sleepy voice and keep putting one foot in front of another.

And that’s what women are really talking about when we talk about being afraid. We are talking about men. But there is, I learned, a difference between being afraid and being unsafe.

Source: I walked 1,000 miles alone through Europe – and learned that fear is the price of freedom | The Guardian

Cooling down is hotting up

As the world heats up, humans are going to need to cool down. The use of air conditioning already accounts for nearly 20% of electricity used in buildings worldwide, so this report highlights the urgent need for higher efficiency standards in cooling technologies to mitigate the strain on energy systems and reduce emissions.

Apparently, effective policies could halve future energy demand and cut costs by $3 trillion by 2050. More important than the financial impact, I guess, is that more efficient air conditioning dumps less hot air into urban environments, which tends to create heat islands (and affect weather patterns).
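Incidentally, the IEA’s two figures imply that buildings account for roughly half of global electricity use. A quick sanity check (my arithmetic, not the report’s):

```python
# Cooling is ~20% of electricity used in buildings, and ~10% of all global
# electricity. If both are true, buildings must account for about half of
# global electricity consumption: 10% / 20% = 50%.
cooling_share_of_buildings = 0.20
cooling_share_of_global = 0.10

buildings_share_of_global = cooling_share_of_global / cooling_share_of_buildings
print(buildings_share_of_global)  # 0.5
```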

Chart showing projected rise in demand for air conditioning

Cooling down is catching on. As incomes rise and populations grow, especially in the world’s hotter regions, the use of air conditioners is becoming increasingly common. In fact, the use of air conditioners and electric fans already accounts for about a fifth of the total electricity in buildings around the world – or 10% of all global electricity consumption.

Over the next three decades, the use of ACs is set to soar, becoming one of the top drivers of global electricity demand. A new analysis by the International Energy Agency shows how new standards can help the world avoid facing such a “cold crunch” by helping improve efficiency while also staying cool.

Source: The Future of Cooling | IEA

Indigenous knowledge, sustainable design, and long-term thinking

This is a perfect example of the kind of sustainable design and long-term thinking we lose when we ignore indigenous knowledge.

In Western Australia, Marri trees, known as Gnaama Boorna in the Menang language, have been pruned by Aboriginal people for generations to collect and store rainwater. The ancient practice involves shaping the trees into a bowl-like structure, making them vital water sources in areas where water is scarce. They are often found in ceremonial areas or on high hills.

Gnaama, meaning hole for water, and Boorna, meaning tree or timber.

[...]

By pruning and trimming parts of specific trees as they grow, traditional owners encourage them to take on a unique, bowl-like shape — helping collect and store rain water.

Source: Specially pruned for centuries in WA, marri trees provide a vital source of water for traditional owners | ABC News

Some advice for readers

Less ‘rules’ than notes, this post by Ryan Holiday (himself a prolific author) is worth reading. I like the part where he turns the tables: “You say you don’t have time to read but what does the screen time app on your phone say? What does your calendar say?”

Along the way, Holiday emphasises the importance of a systematic approach over speed reading, advocates for physical books, note-taking, and seeking wisdom rather than mere facts. He also encourages readers to share impactful books with others. (You can check out my reading list and reviews here.)

So the question I am asked most often is:

How do you read so much? What’s the secret?

The answer is not “I’m a speedreader.” As I’ve written before, speed reading is a scam. The answer is that I have a system, a process that helps me be a productive reader. It’s not my system exactly, as I’ve taken many strategies from history’s greatest readers. Nor is this a system designed around speed or quantity. Reading is wonderful in and of itself, why would I try to rush through it? No, I try to do it well. I try to enjoy it.

Source: These 38 Reading Rules Changed My Life | RyanHoliday.net

Generative AI, misinformation, and content authenticity

As a philosopher, historian, and educator by training, and a technologist by profession, this initiative really hits my sweet spot. The image below shows how, even before AI and digital technologies, altering the public record through manipulating photographs was possible.

Now, of course, spreading misinformation and disinformation is so much easier, especially on social networks. This series of posts from the Content Authenticity Initiative outlines ways in which the technology they are developing can prove whether or not an image has been altered.

Of course, unless verification is built into social networks, this is only likely to be useful to journalists and in a court of law. After all, people tend to reshare whatever chimes with their worldview.

Although it varies in form and creation, generative AI content (a.k.a. deepfakes) refers to images, audio, or video that has been automatically synthesized by an AI-based system. Deepfakes are the latest in a long line of techniques used to manipulate reality — from Stalin's darkroom to Photoshop to classic computer-generated renderings. However, their introduction poses new opportunities and risks now that everyone has access to what was historically the purview of a small number of sophisticated organizations.

Even in these early days of the AI revolution, we are seeing stunning advances in generative AI. The technology can create a realistic photo from a simple text prompt, clone a person’s voice from a few minutes of an audio recording, and insert a person into a video to make them appear to be doing whatever the creator desires. We are also seeing real harms from this content in the form of non-consensual sexual imagery, small- to large-scale fraud, and disinformation campaigns.

Building on our earlier research in digital media forensics techniques, over the past few years my research group and I have turned our attention to this new breed of digital fakery. All our authentication techniques work in the absence of digital watermarks or signatures. Instead, they model the path of light through the entire image-creation process and quantify physical, geometric, and statistical regularities in images that are disrupted by the creation of a fake.

Source: From the darkroom to generative AI | Content Authenticity Initiative

On the need to measure productivity

I’ve long said that no-one really knows what knowledge work looks like. It’s easy to see whether or not someone is digging a hole in the ground, but it’s much more difficult to see whether the work that someone is doing on a computer is ‘productive’.

This is, I think, partly because ‘productivity’ is a concept best applied to work that can be systematised and made routine. A lot of knowledge work is fundamentally creative, so quantitative metrics are meaningless. Who cares if you’ve made a million pull requests if they’re all to change a single character?
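The gameability of count-based metrics can be shown with a toy sketch (the metric and numbers are hypothetical):

```python
# A naive "productivity" metric that rewards volume: the same one-line fix
# can be delivered as a single pull request or sliced into fifty, inflating
# every count without adding any value.
def score_by_counts(pull_requests: int, lines_changed: int) -> int:
    return pull_requests + lines_changed

honest = score_by_counts(pull_requests=1, lines_changed=1)    # one real fix
gamed = score_by_counts(pull_requests=50, lines_changed=50)   # same fix, sliced up

print(honest, gamed)  # 2 100 -- identical outcome, 50x the "productivity"
```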

This article discusses the complexities of assessing productivity in various fields, the issues with current interviewing processes, and suggests that future evaluations may become more tied to tangible accomplishments rather than arbitrary metrics. That’s presupposing, of course, that hierarchical evaluations are even necessary.

[E]very potential metric we devise appears woefully inadequate in assessing this holistic outcome. Whether it's pull requests, lines of code, user stories, story points, or ship dates, it seems that every metric can be manipulated or gamed. Ship dates may be advanced, but quality suffers; story points morph in size depending on the project, and lines of code can be bulked up with a test suite. Even pull requests can be sliced and diced to skew the numbers. It's a frustrating conundrum.

For more fuzzy fields, like product management or marketing or design, it becomes even more hand-wavey. Some fields tend to depend on getting other roles to execute better, but you can’t go rewind history and try things with a different PM to see if things would have been better. Same with design.

[…]

If you give the most productive employee more work, presumably they’d be justified in asking for higher compensation? After all, they are driving greater outcomes for you. Would you be comfortable paying it?

For example, would you pay a 3x more productive designer 3x the fully loaded cost of the average designer? If 10x engineers truly exist, why do pay scales intra company not cover a 10x spectrum?

[…]

My suspicion is that, like in other fields where performance matters and is financially rewarded, there will be a surge in our capacity to measure and evaluate real-life work performance. Compensation will become more closely tied to tangible accomplishments rather than arbitrary levels or seniority. Interviews will transition to be more real-world scenarios, perhaps within the customer’s actual codebase, addressing a genuine problem the customer faces—possibly even compensating the interviewee for their time.

Source: Why is it so hard to measure productivity? | fractional.work

Image: Kelly Sikkema

The declining relevance of Google search

I can’t remember the last time I searched Google. It’s been around six years since I made DuckDuckGo my main search engine. Which is strange, because people use ‘google’ as a verb for searching the web, just as they use ‘hoover’ for vacuum cleaning.

This article explores Google’s history and its impact on SEO and content creation. It’s written by Ryan Broderick, author of Garbage Day, a newsletter to which I subscribe. He charts the rise of alternative platforms like Meta’s Facebook, Instagram, and TikTok, and suggests that Google’s era of influence may be waning.

There is a growing chorus of complaints that Google is not as accurate, as competent, as dedicated to search as it once was. The rise of massive closed algorithmic social networks like Meta’s Facebook and Instagram began eating the web in the 2010s. More recently, there’s been a shift to entertainment-based video feeds like TikTok — which is now being used as a primary search engine by a new generation of internet users.

For two decades, Google Search was the largely invisible force that determined the ebb and flow of online content. Now, for the first time since Google’s launch, a world without it at the center actually seems possible. We’re clearly at the end of one era and at the threshold of another. But to understand where we’re headed, we have to look back at how it all started.

[…]

Twenty-five years ago, at the dawn of a different internet age, another search engine began to struggle with similar issues. It was considered the top of the heap, praised for its sophisticated technology, and then suddenly faced an existential threat. A young company created a new way of finding content.

Instead of trying to make its core product better, fixing the issues its users had, the company, instead, became more of a portal, weighted down by bloated services that worked less and less well. The company’s CEO admitted in 2002 that it “tried to become a portal too late in the game, and lost focus” and told Wired at the time that it was going to try and double back and focus on search again. But it never regained the lead.

That company was AltaVista.

Source: How Google made the world go viral | The Verge

An end to rabbit hole radicalization?

A new peer-reviewed study suggests that YouTube’s efforts to stop people being radicalized through its recommendation algorithm have been effective. The study monitored 1,181 people’s YouTube activity and found that only 6% watched extremist videos, with most of these deliberately subscribing to extremist channels.

Interestingly, though, the study cannot account for user behaviour prior to YouTube’s 2019 algorithm changes, which means we can only wonder how influential the platform was in radicalising people in the run-up to some pretty significant elections.

Around the time of the 2016 election, YouTube became known as a home to the rising alt-right and to massively popular conspiracy theorists. The Google-owned site had more than 1 billion users and was playing host to charismatic personalities who had developed intimate relationships with their audiences, potentially making it a powerful vector for political influence. At the time, Alex Jones’s channel, Infowars, had more than 2 million subscribers. And YouTube’s recommendation algorithm, which accounted for the majority of what people watched on the platform, looked to be pulling people deeper and deeper into dangerous delusions.

The process of “falling down the rabbit hole” was memorably illustrated by personal accounts of people who had ended up on strange paths into the dark heart of the platform, where they were intrigued and then convinced by extremist rhetoric—an interest in critiques of feminism could lead to men’s rights and then white supremacy and then calls for violence. Most troubling is that a person who was not necessarily looking for extreme content could end up watching it because the algorithm noticed a whisper of something in their previous choices. It could exacerbate a person’s worst impulses and take them to a place they wouldn’t have chosen, but would have trouble getting out of.

[…]

The… research is… important, in part because it proposes a specific, technical definition of ‘rabbit hole’. The term has been used in different ways in common speech and even in academic research. Nyhan’s team defined a “rabbit hole event” as one in which a person follows a recommendation to get to a more extreme type of video than they were previously watching. They can’t have been subscribing to the channel they end up on, or to similarly extreme channels, before the recommendation pushed them. This mechanism wasn’t common in their findings at all. They saw it act on only 1 percent of participants, accounting for only 0.002 percent of all views of extremist-channel videos.

Nyhan was careful not to say that this paper represents a total exoneration of YouTube. The platform hasn’t stopped letting its subscription feature drive traffic to extremists. It also continues to allow users to publish extremist videos. And learning that only a tiny percentage of users stumble across extremist content isn’t the same as learning that no one does; a tiny percentage of a gargantuan user base still represents a large number of people.
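The study’s definition of a “rabbit hole event”, quoted above, could be sketched roughly like this (the labels and data shapes are my own invention, not the researchers’):

```python
# A "rabbit hole event": a recommendation leads a user to a more extreme
# type of video than they were previously watching, from a channel type
# they weren't already subscribed to (or similarly extreme channels).
EXTREMITY = {"mainstream": 0, "alternative": 1, "extremist": 2}

def is_rabbit_hole_event(prior_views, subscriptions, recommended):
    """prior_views: channel-type labels the user watched before.
    subscriptions: set of channel-type labels the user subscribes to.
    recommended: channel-type label reached via the recommendation."""
    prior_max = max((EXTREMITY[v] for v in prior_views), default=0)
    more_extreme = EXTREMITY[recommended] > prior_max
    already_subscribed = any(
        EXTREMITY[s] >= EXTREMITY[recommended] for s in subscriptions
    )
    return more_extreme and not already_subscribed

# A mainstream viewer with no extremist subscriptions, pushed to an
# extremist video by a recommendation, counts; an existing subscriber doesn't.
print(is_rabbit_hole_event(["mainstream"], set(), "extremist"))        # True
print(is_rabbit_hole_event(["extremist"], {"extremist"}, "extremist"))  # False
```

This narrow definition is exactly why the study found so few events: most extremist views came from deliberate subscribers, who are excluded by construction.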

Source: The World Will Never Know the Truth About YouTube’s Rabbit Holes | The Atlantic

Crypto is the biggest ponzi scheme of all time

Ben McKenzie, an actor turned anti-crypto activist, argues in his new book Easy Money that while cryptocurrencies highlight legitimate flaws in the financial system, they are essentially a Ponzi scheme.

He criticises the “Hollywoodisation” of crypto and the lack of regulatory oversight, warning that the tech utopianism surrounding crypto and now AI could leave many losers in its wake. It’s funny how people seamlessly move from one grift to the next without ever being properly called out on it.

The secret behind most conspiracy-driven movements is that there is often a glimmer of truth at the centre of their beliefs. Anti-vaxxers, for instance, can point to the past behaviour of large pharmaceutical companies as evidence that the medical establishment can’t be trusted. This glimmer is what’s used to ensnare you, says Ben McKenzie, the actor and cryptocurrency critic.

[…]

After a friend urged him to buy Bitcoin, McKenzie – a former economics student with a degree from the University of Virginia – took a 24-part online course on cryptocurrencies, taught by the current US Securities and Exchange Commission chair Gary Gensler. He came away thinking the entire cryptocurrency thing was a scam. Worse, it was a scam with a lot of momentum behind it. “Advocates will tell you there is no ‘Bitcoin marketing department’,” McKenzie said. “But of course, if Bitcoin and crypto doesn’t have a product, if there is no actual tangible asset behind it, then in fact, Bitcoin and crypto is only marketing. It’s only a story.”

[…]

During the peak of 2021, some of the most recognisable people in the world – including Matt Damon, Reese Witherspoon and Kim Kardashian – began promoting cryptocurrencies and non-fungible tokens (NFTs).

McKenzie told me this is part of a more aggressive “hustle culture” in which people use their social contacts to promote products. Multi-level marketing (or MLM) and pyramid-selling schemes have existed for at least a century, but they have been transformed by technology. “In the 1950s, if you wanted to sell someone Mary Kay Cosmetics or Tupperware, you would need to invite them over to your house, cook them dinner, spend three hours trying to convince them [to buy products].” Now, he said, “the MLM can be done through TikTok and Instagram.”

Though McKenzie is openly critical of the celebrities who have pushed these products, he reserves ultimate blame for the sluggish regulators that allowed it to happen. “I think in many cases the celebrities didn’t really understand what they were selling. Which is not to absolve them of a moral, ethical [or] potentially even legal responsibility for their actions. But they don’t need to be bad people – they just see easy money, right?”

Source: “The biggest Ponzi of all time”: why Ben McKenzie became a crypto critic | New Statesman

B Lane

There’s a lot going on in this short post. It reminded me of a saying of Steve Jobs: “A players attract A players. B players attract C players.”

Now there’s something in that, in terms of the mentality that people bring to working hard and playing hard. But this post is talking about the way that people treat other people.

I’ve definitely noticed in my life, from my own studies to my kids’ sports teams, the tyranny of the “not quite top-level” mindset. It’s almost like you have to get over yourself to get to the “top”. What that is and whether it’s worth pursuing is another question entirely.

Swimmers
I noticed that when I swam next to the B lane swimmers, they were not nearly as kind and friendly as the C lane swimmers had been when they were my next-lane neighbours. The A lane swimmers were extremely nice, and were generous with encouragement, praise and tips. This wasn’t a hard and fast rule, but I started to notice a pattern: A, C, and D lane swimmers tended to be nice, friendly, and helpful to pretty much everyone; B lane swimmers tended to be nice to A lane and other B lane swimmers but not so much to C and D.

When I stopped doing tris and moved back to field sports, I started to notice the same thing. The very top athletes were nice to everyone and so were the middle and bottom of the pack. The not quite top players, though, were less friendly. They played more political games, and acted out their threatened feelings of being not quite good enough by being snobbish to those below them. (In retrospect, I worry I did some of this, too, especially when I was playing on a top team but was not a top player. I definitely felt a need to prove myself.)

I have since noted the same phenomenon in nearly every domain, including academia. The truly great researchers are generous and friendly; so are many of the middle of the roaders. Those who have something to prove, though, and who feel like they aren’t quite managing to do it, show definite aspects of being B lane swimmers.

Source: The B Lane Swimmer | Holly Witteman

Image: Quino AI

Money does not solve disasters like this

The Burning Man Festival started in 1986 as a small event on a beach. It was originally an event for hippies, bohemians, and those who lived outside of mainstream culture. It’s an art event.

As with most things like this, it became cool, and so people with money started going. Now, less than 40 years later, it’s dominated by the Silicon Valley elite, celebrities, and grifters.

While one person has died at this year’s event, which is a tragedy, I can’t help feeling some schadenfreude at rich people being stuck in a situation they can’t buy their way out of.

Tens of thousands of “burners” at the Burning Man festival have been told to stay in the camps, conserve food and water and are being blocked from leaving Nevada’s Black Rock desert after a slow-moving rainstorm turned the event into a mud bath.

[…]

As of noon Saturday, Nevada’s Bureau of Land Management declared the entrance to Burning Man shut down for good. “Rain over the last 24 hours has created a situation that required a full stop of vehicle movement on the playa. More rain is expected over the next few days and conditions are not expected to improve enough to allow vehicles to enter the playa,” read a BLM statement.

[…]

The festival this year was already taking place under unusual circumstances with the desert floor flooded by the remnants of Hurricane Hilary as the event was being set up.

Tara Saylor, an attendee from Ojai, California, faced the threat of the hurricane as well as a 5.1-magnitude earthquake that shook her city before she left, reported the Los Angeles Times. Saylor told the newspaper she’s seen the founders of two different companies at Burning Man this year, but added, “it doesn’t matter how much money you have, nobody can do anything about it. There’s no planes, there’s no buses.”

“Money does not solve disasters like this.”

Source: Burning Man festival-goers trapped in desert as rain turns site to mud | The Guardian

The Atlantis of the North Sea

A couple of years ago, I started subscribing to Northern Earth magazine on the recommendation of Warren Ellis. It’s quirky and brilliant.

The most recent issue contains a reference to Rungholt, which I then looked up on Wikipedia. The town was destroyed in the 14th century by a storm surge. Until excavations this year, people weren’t entirely sure it had ever existed, but it turns out it was a flourishing port town.

Rungholt was a settlement in North Frisia, in what was then the Danish Duchy of Schleswig. The area today lies in Germany. Rungholt reportedly sank beneath the waves of the North Sea when a storm tide (known as Grote Mandrenke or Den Store Manddrukning) hit the coast on 15 or 16 January 1362.

[…]

In June 2023, the German Research Foundation announced that researchers had found the probable location of Rungholt under mudflats in the Wadden Sea and had already mapped 10 square kilometers of the area.

[…]

Today it is widely accepted that Rungholt existed and was not just a local legend. Documents support this, although they mostly date from much later times (16th century). Archaeologists think Rungholt was an important town and port. It might have contained up to 500 houses, with about 3,000 people. Findings indicate trade in agricultural products and possibly amber. Supposed relics of the town have been found in the Wadden Sea, but shifting sediments make it hard to preserve them.

Source: Rungholt | Wikipedia

Reconstructing Tenochtitlan

This is an absolutely incredible piece of work, showing the complexity and sophistication of the Aztec empire. My favourite part is the slider that allows you to see how much of Mexico City is based upon the structure of Tenochtitlan.

The year is 1518. Mexico-Tenochtitlan, once an unassuming settlement in the middle of Lake Texcoco, now a bustling metropolis. It is the capital of an empire ruling over, and receiving tribute from, more than 5 million people. Tenochtitlan is home to 200,000 farmers, artisans, merchants, soldiers, priests and aristocrats. At this time, it is one of the largest cities in the world.

Today, we call this city Ciudad de Mexico - Mexico City.

Not much is left of the old Aztec - or Mexica - capital Tenochtitlan. What did this city, raised from the lake bed by hand, look like? Using historical and archeological sources, and the expertise of many, I have tried to faithfully bring this iconic city to life.

Source: A Portrait of Tenochtitlan

Taking screenagers to the forest

As the parent of a 16-year-old boy and a 12-year-old girl I found this article fascinating. Written by Caleb Silverberg, now 17 years of age, it describes his decision to break free from his screen addiction and enrol in Midland, an experiential boarding school located in a forest where technology is forbidden.

Trading his smartphone for an ax, he found liberation and genuine human connection through chores like chopping firewood, living off the land, and engaging in face-to-face conversations. Silverberg advocates for a “Technology Shabbat,” a periodic break from screens, as a solution for his generation’s screen-related issues like ADHD and depression.

At 15 years old, I looked in the mirror and saw a shell of myself. My face was pale. My eyes were hollow. I needed a radical change.

I vaguely remembered one of my older sister’s friends describing her unique high school, Midland, an experiential boarding school located in the Los Padres National Forest. The school was founded in 1932 under the belief of “Needs Not Wants.” In the forest, cell phones and video games are forbidden, and replaced with a job to keep the place running: washing dishes, cleaning bathrooms, or sanitizing the mess hall. Students depend on one another.

[…]

September 2, 2021, was my first day at Midland, when I traded my smartphone for an ax.

At Midland, students must chop firewood to generate hot water for their showers and heat for their cabins and classrooms. If no one chops the wood or makes the fire, there’s a cold shower, a freezing bed, or a chilly classroom. No punishment by a teacher or adult. Just the disappointment of your peers. Your friends.

[…]

Before Midland, whenever I sat on the couch, engrossed in TikTok or Instagram, my parents would caution me: “Caleb, your brain is going to melt if you keep staring at that screen!” I dismissed their concerns at first. But eventually, I experienced life without an electronic device glued to my hand and learned they were right all along.

[…]

I have been privileged to attend Midland. But anyone can benefit from its lessons. To my generation, I would like to offer a 5,000-year-old solution to our twenty-first-century dilemma. Shabbat is the weekly sabbath in Judaic custom where individuals take 24 hours to rest and relax. This weekly reset allows our bodies and minds to recharge.

Source: Why I Traded My Smartphone for an Ax | The Free Press

What we can learn about the climate emergency from the world's response to ozone depletion in the 1980s

This article by Andrew Dessler discusses the near-miss catastrophe of ozone depletion. Anyone alive at the time can probably remember how the world came together to address the issue by phasing out chlorofluorocarbons (CFCs) through the Montreal Protocol in 1987.

Dessler draws parallels with the current climate crisis, arguing that global policy collaboration based on scientific research can solve pressing environmental issues. Along the way, he also debunks claims that transitioning to renewable energy would be economically catastrophic.

In the early 1970s, scientists theorized that certain man-made chemicals, known as chlorofluorocarbons or CFCs, had the potential to reduce the amount of ozone in our atmosphere — this became known as ozone depletion. Given the crucial role of ozone in maintaining a livable environment, this caused great concern.

Even before evidence of actual ozone depletion was observed, countries began to take action. For example, the U.S. banned many non-essential uses of the chemicals, such as propellants in aerosol spray cans. This reflected a different view at the time that government should protect its citizens rather than protect the profits of corporations.

By the mid-1980s, the world was busy negotiating the phase-out of the primary ozone-depleting CFCs when the Antarctic ozone hole (AOH) was discovered. The AOH is an annual event: over Antarctica, the majority of the ozone is destroyed during Spring. The ozone builds back up as Spring ends and, by Summer, things are basically back to normal.

[…]

The ‘reference’ future is our world, the ‘world avoided’ is the world that would have existed had we not phased out CFCs. By the 2060s, the world would have lost two-thirds of its ozone. This, in turn, would have greatly increased the dangerous ultraviolet radiation reaching the surface. This plot shows the UV dose at noon under clear skies in July in mid-latitudes.

Today’s value of 10 is ‘high risk’ for UV exposure, which is why public health professionals tell you to wear sunscreen when you go out. The world avoided has a UV index of 30 — three times what is considered high risk and high enough to give you a perceptible sunburn in 5 minutes.

Source: Ozone depletion: The bullet that missed | Andrew Dessler

Disaster capitalism, climate change, and agriculture

Many readers will be aware of the extreme weather conditions in Vermont, USA. These have led to a disastrous year for agriculture and financial struggles for local farmers.

The article delves into the broader implications of these challenges, framing them within the context of ‘disaster capitalism,’ where the degradation of farming and natural resources is exploited for profit, exacerbating systemic issues and inequalities. We’re at the thin end of the wedge with this stuff.

Vermont has suffered a miserable growing season and many Vermonters lost a great deal to the flood. Some lost their home and everything in it. But only two people lost their lives, and very few lost their jobs. This is astonishing, given the number of businesses that were shuttered for most of the last eight weeks. Some are still not open. Yes, we may have lost quite a lot, but our losses are marginal compared to the many dozens of people killed in the Maui fire and the billions of dollars of devastation in flooding elsewhere in this country. Similarly, farm losses across the country are measured not in tens of millions, but in billions. Net income from US farming is expected to drop by $30.5 billion, an 18.2% loss over 2022, which was itself not a good year. I’m sure Vermont is in that estimate, but we only add a bit to the decimal places of that number. And these few horrific numbers barely scrape the surface of loss in the US, with even greater horrors mounting everywhere else in the world. (Can Canada even measure the damages sustained in this year’s fires…)

These numbers show that we are in the age of the disaster capitalism described by Naomi Klein after she witnessed the response to Katrina. There has always been more income in the breaking of human lives than in maintaining them. In truth, the 20th century surge in disposable products and planned obsolescence was nothing but extracting profits from breakdown. Similarly, our economy was strongest as it pulled itself out of the devastation of World War II. As long as there are resources and cheap labor somewhere, somebody will profit over our loss. What is different now is that nearly all resources and cheap labor are being funneled into this economics of disaster with little left to sustain actual lives.

[…]

We are in disaster capitalism and have been for a long while. I believe that capitalism has always been intrinsically tied to disaster and destruction, whether natural or engineered, but not many people share my views — because my views are from the edge spaces, the bottom and sides of this system. I sit outside and can see things that those dependent on this capitalist system for privilege and wealth do not, or can not. Upton Sinclair’s quip about the inverse relationship between a man’s paycheck and his ability to comprehend any given issue is the duct tape that holds capitalism together in these increasingly disastrous times — increasingly disastrous because of capitalism, whether the result of inadvertent externality or planned waste and breakdown. This system can only survive if enough people refuse to see that its basic function is destruction. Though of course it can also only survive if there are cheap resources and labor to mine for disaster remediation, and we are now entering the stage of late capitalism that has no more cheap things to turn into waste. Capital is feeding on itself, struggling to bring in revenues that can cover its increased costs — costs like fires and floods, scarce resources and a debilitated workforce wracked by disaster. The system needs all the propaganda it can muster to keep telling itself that it is alive and well, keeping those men with dependent paychecks blind to its demise.

Source: The future earth is already here | resilience

Eating the rich is optional, taxing them is mandatory

The article in Insider discusses the findings of the 2022 World Inequality Report, which highlights extreme levels of wealth and income inequality globally. The report was coordinated by leading economists and debunks the trickle-down economic theory.

They found that the bottom half of the global population owns just 2% of total wealth, while the top 10% holds 76%. It also notes that billionaires now hold a 3% share of global wealth, up from 1% in 1995. As everyone knows, inequality is a result of political choices and the only way to fix it is through progressive wealth taxes and perhaps even reparations.

The data serves as a complete rebuke of the trickle-down economic theory, which posits that cutting taxes on the rich will "trickle down" to those below, with the cuts eventually benefiting everyone. In America, trickle-down was exemplified by President Ronald Reagan's tax slashes. It's a theory that persists today, even though most research has shown that 50 years of tax cuts benefit the wealthy and worsen inequality.

The researchers are some of the leading minds on inequality in the entire field of economics. Chancel is the co-director of the World Inequality Lab, while Saez and Zucman have literally written a book on the rich dodging taxes and helped create wealth tax proposals for senators like Elizabeth Warren and Bernie Sanders.

[…]

Billionaire gains are a well-documented trend: The left-leaning Institute for Policy Studies and Americans for Tax Fairness found that America's billionaires added $2.1 trillion to their wealth during the pandemic, a 70% increase.

Source: Huge 20-Year Study Shows Trickle-Down Is a Myth, Inequality Rampant | Insider

Image: Mathieu Stern

How does doing what I need make time for everything else?

I can’t remember whether someone said to me or I once read that we should manage our energy rather than our time, but it made a big difference to my life. Having control over when and how you work is a huge privilege, and enables you to be the best version of you.

People often smile or laugh when I talk about the SOFA philosophy, but giving yourself the freedom to start creative pursuits and not finish them is actually massively liberating, mood-boosting, and energy-giving.

The point being that you don’t need to ‘make time’ to do things. You just need to prioritise stuff that energises you.

I often find myself listening as someone talks about being out of time. Even the most progressive and thoughtful organizations regularly cultivate situations where the amount of work outstrips the capacity of the people in place to do it. Combine that with our appalling lack of support for caretakers, the administrative burden of accessing your healthcare, the often thankless tasks of keeping house and home, and it’s no wonder that even the people most trained in solving tricky problems run into a hard wall with this one.

[…]

We all know that time can be stretchy or compressed—we’ve experienced hours that plodded along interminably and those that whisked by in a few breaths. We’ve had days in which we got so much done we surprised ourselves and days where we got into a staring contest with the to-do list and the to-do list didn’t blink. And we’ve also had days that left us puddled on the floor and days that left us pumped up, practically leaping out of our chairs. What differentiates these experiences isn’t the number of hours in the day but the energy we get from the work. Energy makes time.

Here’s a concrete example, and perhaps a familiar one: someone is so busy with work and caretaking that they don’t make time for their art. At the end of the day they’re too tired to write or paint or make music or whathaveyou. So they don’t. Days, then weeks go by. They are more and more tired. They are getting less and less done. They take a mental health day and catch up on sleep but the exhaustion persists. Their overwhelm grows larger, becomes intolerable. The usual tactics don’t work.

Then one day they say fuck it all. They eat leftover pasta over the sink, drop mom off at her mahjongg game, and go sit in the park to draw. They draw for hours, until the sun goes down and they’re squinting under the street lights. And, lo and behold, the next day they plow through all those lingering to-dos. They see clearly that half of them were unnecessary when before they all seemed critical. They recognize a few others as things better handed off to their peers. They suddenly find time for attending to that one project they’d been procrastinating on for weeks. They sleep better. Their skin looks great. (Okay I might be exaggerating on that last one, but only mildly.)

It turns out, not doing their art was costing them time, was draining it away, little by little, like a slow but steady leak. They had assumed, wrongly, that there wasn’t enough time in the day to do their art, because they assumed (because we’re conditioned to assume) that every thing we do costs time. But that math doesn’t take energy into account, doesn’t grok that doing things that energize you gives you time back. By doing their art, a whole lot of time suddenly returned. Their art didn’t need more time; their time needed their art.

[…]

The question to ask with all those things isn’t, “how do I make time for this?” The answer to that question always disappoints, because that view of time has it forever speeding away from you. The better question is, how does doing what I need make time for everything else?

Source: Energy makes time | everything changes

Image: Aron Visuals

Note taking tools and processes

Casey Newton delves into the limitations of current note-taking apps like Obsidian, arguing that they are designed more for storing information than for sparking insights or improving thinking. He suggests that while AI has the potential to revolutionise these platforms by making them more interactive and insightful, the real challenge lies in our ability to focus and think deeply — something that software alone cannot automate.

This is partly why I write Thought Shrapnel. Not only does it force me to actually read things I’ve bookmarked, but I make sense of them, and often make links to my work and other things I’ve read.

Note-taking, after all, does not take place in a vacuum. It takes place on your computer, next to email, and Slack, and Discord, and iMessage, and the text-based social network of your choosing. In the era of alt-tabbing between these and other apps, our ability to build knowledge and draw connections is permanently challenged by what might be our ultimately futile efforts to multitask.

[…]

In short: it is probably a mistake, in the end, to ask software to improve our thinking. Even if you can rescue your attention from the acid bath of the internet; even if you can gather the most interesting data and observations into the app of your choosing; even if you revisit that data from time to time — this will not be enough. It might not even be worth trying.

The reason, sadly, is that thinking takes place in your brain. And thinking is an active pursuit — one that often happens when you are spending long stretches of time staring into space, then writing a bit, and then staring into space a bit more. It’s here that the connections are made and the insights are formed. And it is a process that stubbornly resists automation.

Which is not to say that software can’t help. Andy Matuschak, a researcher whose spectacular website offers a feast of thinking about notes and note-taking, observes that note-taking apps emphasize displaying and manipulating notes, but never making sense between them. Before I totally resign myself to the idea that a note-taking app can’t solve my problems, I will admit that on some fundamental level no one has really tried.

Source: Why note-taking apps don’t make us smarter | Platformer

Poverty is expensive. Cash helps homeless people.

Real-world studies such as this are important for busting myths about homeless people spending money recklessly compared to the rest of us.

The widely held stereotype that people experiencing homelessness would be more likely to spend extra cash on drugs, alcohol and “temptation goods” has been upended by a study that found a majority used a $7,500 payment mostly on rent, food, housing, transit and clothes.

The biases punctured by the study highlight the difficulties in developing policies to reduce homelessness, say the Canadian researchers behind it. They said the unconditional cash appeared to reduce homelessness, giving added weight to calls for a guaranteed basic income that would help adults cover essential living expenses.

[…]

They found the cash recipients each spent an average of 99 fewer days homeless than the control group, increased their savings more and also “cost” society less by spending less time in shelters.

[…]

Researchers ensured the cash was in a lump sum “to enable maximum purchasing freedom and choice” as opposed to small, consistent transfers.

Source: Canada study debunks stereotypes of homeless people’s spending habits | The Guardian

Can you use CC licenses to restrict how people use copyrighted works in AI training?

TL;DR seems to be that copyright isn’t going to prevent people data mining content to use for training AI models. However, there are protections around privacy that might come into play.

This is among the most common questions that we receive. While the answer depends on the exact circumstances, we want to clear up some misconceptions about how CC licenses function and what they do and do not cover.

You can use CC licenses to grant permission for reuse in any situation that requires permission under copyright. However, the licenses do not supersede existing limitations and exceptions; in other words, as a licensor, you cannot use the licenses to prohibit a use if it is otherwise permitted by limitations and exceptions to copyright.

This is directly relevant to AI, given that the use of copyrighted works to train AI may be protected under existing exceptions and limitations to copyright. For instance, we believe there are strong arguments that, in most cases, using copyrighted works to train generative AI models would be fair use in the United States, and such training can be protected by the text and data mining exception in the EU. However, whether these limitations apply may depend on the particular use case.

Source: Understanding CC Licenses and Generative AI | Creative Commons

It's all about the DMs

I think it’s fascinating that this article uses a zeugma to explain what’s happened to places that we’ve called home online. In other words, we’ve moved from ‘social’ media to social ‘media’, with the emphasis now on content and performance rather than on sharing.

The fatigue average people feel when it comes to posting on Instagram has pushed more users toward private posting and closed groups. Features like Close Friends (a private list of people who have access to your content) and the rise of group chats give people a safer place to share memes, gossip with friends, and even meet new people. It's less pressure — they won't mind if I didn't blur out the pimple on my forehead — but this side of Instagram hardly fulfills the original free-flowing promise of social media.

[…]

Despite the efforts of big incumbents and buzzy new apps, the old ways of posting are gone, and people don’t want to go back. Even Adam Mosseri, the head of Instagram, admitted that users have moved on to direct messages, closed communities, and group chats. Regularly posting content is now largely confined to content creators and influencers, while non-creators are moving toward sharing bits of their lives behind private accounts.

As more people have been confronted with the consequences of constant sharing, social media has become less social and more media — a constellation of entertainment platforms where users consume content but rarely, if ever, create their own. Influencers, marketers, average users, and even social-media executives agree: Social media, as we once knew it, is dead.

[…]

And if Instagram was the bellwether for the rise and fall of the “social” social-media era, it is also a harbinger of this new era. “If you look at how teens spend their time on Instagram, they spend more time in DMs than they do in stories, and they spend more time in stories than they do in feed,” Mosseri said during the “20VC” interview. Given this changing behavior, Mosseri said the platform has shifted its resources to messaging tools. “Actually, at one point a couple years ago, I think I put the entire stories team on messaging,” he said.

Source: Social media is dead | Insider

A philosophy of travel

There’s a book by philosopher Alain de Botton called The Art of Travel. In it, he cites Seneca as bemoaning the fact that when you travel you take yourself, and all of your anxieties, frustrations, and insecurities, with you. In other words, you might escape your home, but you don’t escape yourself.

This article critically examines the concept of travel, questioning its oft-claimed benefits of ‘enlightenment’ and ‘personal growth’. It cites various thinkers who have critiqued travel (including one of my favourites, Fernando Pessoa) suggesting that it can actually distance us from genuine human connection and meaningful experiences.

It’s hard not to agree with the conclusion that the allure of travel may lie in its ability to temporarily distract us from the existential dread of mortality. Perhaps we need more Marcus Aurelius in our lives: he extolled one of the benefits of philosophy as being able to find calm no matter where you are.

“A tourist is a temporarily leisured person who voluntarily visits a place away from home for the purpose of experiencing a change.” This definition is taken from the opening of “Hosts and Guests,” the classic academic volume on the anthropology of tourism. The last phrase is crucial: touristic travel exists for the sake of change. But what, exactly, gets changed? Here is a telling observation from the concluding chapter of the same book: “Tourists are less likely to borrow from their hosts than their hosts are from them, thus precipitating a chain of change in the host community.” We go to experience a change, but end up inflicting change on others.

For example, a decade ago, when I was in Abu Dhabi, I went on a guided tour of a falcon hospital. I took a photo with a falcon on my arm. I have no interest in falconry or falcons, and a generalized dislike of encounters with nonhuman animals. But the falcon hospital was one of the answers to the question, “What does one do in Abu Dhabi?” So I went. I suspect that everything about the falcon hospital, from its layout to its mission statement, is and will continue to be shaped by the visits of people like me—we unchanged changers, we tourists. (On the wall of the foyer, I recall seeing a series of “excellence in tourism” awards. Keep in mind that this is an animal hospital.)

Why might it be bad for a place to be shaped by the people who travel there, voluntarily, for the purpose of experiencing a change? The answer is that such people not only do not know what they are doing but are not even trying to learn. Consider me. It would be one thing to have such a deep passion for falconry that one is willing to fly to Abu Dhabi to pursue it, and it would be another thing to approach the visit in an aspirational spirit, with the hope of developing my life in a new direction. I was in neither position. I entered the hospital knowing that my post-Abu Dhabi life would contain exactly as much falconry as my pre-Abu Dhabi life—which is to say, zero falconry. If you are going to see something you neither value nor aspire to value, you are not doing much of anything besides locomoting.

[…]

The single most important fact about tourism is this: we already know what we will be like when we return. A vacation is not like immigrating to a foreign country, or matriculating at a university, or starting a new job, or falling in love. We embark on those pursuits with the trepidation of one who enters a tunnel not knowing who she will be when she walks out. The traveller departs confident that she will come back with the same basic interests, political beliefs, and living arrangements. Travel is a boomerang. It drops you right where you started.

[…]

Travel is fun, so it is not mysterious that we like it. What is mysterious is why we imbue it with a vast significance, an aura of virtue. If a vacation is merely the pursuit of unchanging change, an embrace of nothing, why insist on its meaning?

Source: The Case Against Travel | The New Yorker

Using semesters for goal-setting

This article suggests using the academic calendar as a framework for setting and achieving personal goals, breaking life into “semesters” to focus on mini-goals that contribute to larger ambitions. It argues that this approach can aid in time management, motivation, and skill development, offering a structured yet flexible way to make meaningful progress in various aspects of life.

As someone who spent a long time in formal education, was a teacher, and spent time working in Higher Education, I find it difficult to get out of the habit of thinking in academic years and breaking work into ‘terms’. Perhaps I should be leaning into it?

While it’s important to set goals, the roadmap for how to attain them can be murky. Instead of embarking without a plan toward broad ambitions, there’s value in incremental objectives in service of a larger aim. Take a page from the educational system and divide the future into “semesters” — traditionally 15 to 17 weeks long at American colleges — in which to implement minigoals to help get you where you want to go. Use the traditional academic year as a guide to help you stay on track, says Rachel Wu, an associate professor of psychology at the University of California Riverside. Many community classes and educational opportunities are offered roughly on a quarter or semester basis. “At the very least, it will help people, maybe, feel young again. I think that’s a huge benefit,” Wu says. “They can think back to that point in their life when they had that kind of organization and that might be something that works for them.” (You don’t need to follow a traditional academic structure by any means, but having a firm start and end date within a few months’ span in which to focus on certain skills or activities can help keep you motivated.)

[…]

Modeling your life after academic years allows you to adequately mark your process. It’s difficult to determine improvement with daily or even weekly goals, Fishbach says. But with a quarterly or biannual milestone, you’re more easily able to track your progress; you can more clearly look back on what you’ve learned after a 20-week intro to coding class as opposed to after a few days of instruction. The end of a semester allows for these report cards. “It just helps you feel that you’re growing as a person,” Fishbach says. “You’re not the person you were three months ago.”

[…]

A self-imposed semester system also lends itself to increased motivation due, in part, to the fresh start effect, where people are more driven to pursue goals after a “fresh start” like a new year or semester. (Fully embrace the back-to-school energy and buy some new school supplies, Wu says, “and then learn something.”) With goals that have an endpoint, called an all-or-nothing goal, Fishbach says, motivation increases as you approach the deadline. Having a distinct cutoff to your personal semester can help you stay driven knowing there’s an end in sight.

Source: Semesters for adults: How the academic school year can help with goal-setting, time management, and motivation | Vox

The uninhabitable earth

This interactive tool maps in 3D where our planet will become uninhabitable due to a combination of heat, water stress, sea level rise, and tropical cyclones.

It’s an amazing and depressing visualisation, which indirectly shows how climate migration will inevitably increase in the coming decades.

Climate change is destroying people's livelihoods. By the year 2100, all areas that are red in the visualisation will become “uninhabitable”. Extreme heat, tropical cyclones, rising sea levels, water stress or a combination of those are projected to make it difficult or impossible to live there.

Source: Climate change: Mapping in 3D where the earth will become uninhabitable | Berliner Morgenpost

The world's largest climate-positive artwork provides food and nesting spots via algorithm

It’s interesting that this is being conceptualised as an ‘artwork’ rather than a technological intervention. Perhaps this is the way to deal with the climate crisis, by bringing algorithms from the cold, sterile environment of technology into the warmer, more joyful world of art?

This multidisciplinary project by Alexandra Daisy Ginsberg explores the relationship between humans, nature, and technology and aims to draw attention to the importance of insects in pollination by creating an algorithmic solution for planting designs that serve a diverse range of pollinator species.

The project changes depending on location; it debuted at the Eden Project in 2021 with 7,000 plants across 80 varieties. These provide food and nesting spots for insects, with the aim of creating the world’s largest climate-positive artwork.

This is not a natural ecosystem planted outside, there are plants from all over the world. With the expert group, we chose not to focus on native plants only because they are locally appropriate so they’re not invasive. So that’s the first thing⁠—it’s an artificial landscape designed for nature, so it’s a very different way of creating an ecosystem. The other big challenge to the art world that I’m proposing is creating a climate-positive artwork. I also show in museums and I use digital media and that’s all very carbon-consuming. Here, we actually have an artwork fabricated in plants. It has its own climate impact because of the soil we’re moving, the plastic pots, the shipping of plants, but it’s here for at least three years, so it starts to outweigh that negative. There’s also a question of how we measure that, and that’s something I’m really interested in.

The other thing that’s really important to me is upending the idea of value. The art market is all about the one, the singular, the limited edition. This is an unlimited edition. The idea is: the more people who have one, the better each one is because each one supports the other. For me, that’s a strong statement to make to commissioners and when I’m trying to get more partners involved. It’s a very different way of thinking about how we create art and what its purpose is. For me, this is about playfulness, joy and celebrating nature. I call it an artwork and not a garden project because I think situating it in that context makes a powerful statement in itself.

Source: An Interview with Alexandra Daisy Ginsberg | Berlin Art Link

AI and bullshit jobs

I had the pleasure of working with the large-brained Helen Beetham when I was at Jisc just over a decade ago. In this long-ish post, she covers quite a few areas, with plenty of links, and pulls the threads together around graduate jobs and an AI curriculum.

While I could have quoted a lot of this, especially around innovation, the stories being told to graduates, and the neo-colonial nature of AI companies, I’ve gone for the last three paragraphs in which Helen discusses bullshit jobs. I’d highly recommend reading the whole thing.

My hope is that, rather than a curriculum ‘for AI’, these conversations would create space for learning that addresses human challenges. Getting life on earth out of the mess that fossil fuels and rampant production have made of it will take all the graduate labour we can produce and more. Nobody is going to be without meaningful work - not climate scientists or green energy specialists or engineers or geologists or computer scientists or materials chemists or statisticians. Not a single person educated in the STEM subjects beloved of governments everywhere can be left idle. But nor are we getting out of this without social scientists to help us weather the social and economic and political storms, humanities graduates to develop new laws and policies, new philosophies and imagined futures, and professionals committed to a just transition in their own spheres of work. And there are other crises, entwined with the climate crisis, that graduates need and want to address, such as galloping economic inequality, crises of democracy and human rights, food and water shortages, and the crisis of care. Universities can offer fewer and fewer guarantees of secure employment and decent pay, but they can offer meaningful work, justifying students’ investment in the future.

The longer you look at the things ChatGPT can do, the more they resemble what David Graeber described as Bullshit Jobs - jobs that don’t need doing. While I don’t agree with the way he singles out specific job roles, Graeber is surely right that more and more work involves doing things with data and information and ‘content’ that has no value beyond maintaining those systems. And one claim he made that is borne out by workplace research is that meaningless work is bad for people’s mental health.

It’s a nice little aphorism that ‘if AI can do your job, AI should do your job’. But here’s a different one. If AI can ‘do’ your job, you deserve a better job. And if meaningless jobs are bad for workers’ mental health, how much worse are they for all our futures? The phrase ‘fiddling while Rome burns’ hardly begins to cover our present situation. As the polycrisis heats up, the crisis of not enough water-cooler text is not something any graduate should have to care about, nor any university curriculum either.

Source: ‘Luckily, we love tedious work’ | Helen Beetham

We need to talk about AI porn

Thought Shrapnel is a prude-free zone, especially as the porn industry tends to be a technological innovator. It’s important to say, though, that the objectification of women and non-consensual generation of pornography is not just a bad thing but societally corrosive.

By now, we’re familiar with AI models being able to create images of almost anything. I’ve read of wonderful recent advances in the world of architecture, for example. Some of the most popular AI generators have filters to prevent abuse, but of course there are many others.

As this article details, a lot of porn has already been generated. Prudishness about people’s kinks aside, there are all kinds of philosophical, political, and legal issues at play here. Child pornography is abhorrent; how is our legal system going to deal with AI-generated versions? What about the inevitable ‘shaming’ of people via AI-generated sex acts?

All of this is a canary in the coalmine for what happens in society at large. And this is why philosophical training is important: it helps you grapple with the implications of technology, the ‘why’ as well as the ‘what’. I’ve got a lot more thoughts on this, but I actually think it would be a really good topic to discuss as part of the next season of the WAO podcast.

“Create anything,” Mage.Space’s landing page invites users with a text box underneath. Type in the name of a major celebrity, and Mage will generate their image using Stable Diffusion, an open source, text-to-image machine learning model. Type in the name of the same celebrity plus the word “nude” or a specific sex act, and Mage will generate a blurred image and prompt you to upgrade to a “Basic” account for $4 a month, or a “Pro Plan” for $15 a month. “NSFW content is only available to premium members,” the prompt says.

[…]

Since Mage by default saves every image generated on the site, clicking on a username will reveal their entire image generation history, another wall of images that often includes hundreds or thousands of AI-generated sexual images of various celebrities made by just one of Mage’s many users. A user’s image generation history is presented in reverse chronological order, revealing how their experimentation with the technology evolves over time.

Scrolling through a user’s image generation history feels like an unvarnished peek into their id. In one user’s feed, I saw eight images of the cartoon character from the children’s show Ben 10, Gwen Tennyson, in a revealing maid’s uniform. Then, nine images of her making the “ahegao” face in front of an erect penis. Then more than a dozen images of her in bed, in pajamas, with very large breasts. Earlier the same day, that user generated dozens of innocuous images of various female celebrities in the style of red carpet or fashion magazine photos. Scrolling down further, I can see the user fixate on specific celebrities and fictional characters, Disney princesses, anime characters, and actresses, each rotated through a series of images posing them in lingerie, schoolgirl uniforms, and hardcore pornography. Each image represents a fraction of a penny in profit to the person who created the custom Stable Diffusion model that generated it.

[…]

Generating pornographic images of real people is against the Mage Discord community’s rules, which the community strictly enforces because it’s also against Discord’s platform-wide community guidelines. A previous Mage Discord was suspended in March for this reason. While 404 Media has seen multiple instances of non-consensual images of real people and methods for creating them, the Discord community self-polices: users flag such content, and it’s removed quickly. As one Mage user chided another after they shared an AI-generated nude image of Jennifer Lawrence: “posting celeb-related content is forbidden by discord and our discord was shut down a few weeks ago because of celeb content, check [the rules.] you can create it on mage, but not share it here.”

Source: Inside the AI Porn Marketplace Where Everything and Everyone Is for Sale | 404 Media

Raising the average level of creativity using AI

Like most infants, my daughter wanted to speak before she was able to. Unlike most infants, she was extremely frustrated that she couldn’t do so.

Most people can’t draw as well as they would like. Many people become exasperated when they can’t adequately express their ideas in written form.

AI can help with all of this and, in my case, already is. This article, which draws on the results of three academic studies, is interesting in terms of how we can raise the average level of human creativity with the use of AI.

Each of the three papers directly compares AI-powered creativity and human creative effort in controlled experiments. The first major paper is from my colleagues at Wharton. They staged an idea generation contest: pitting ChatGPT-4 against the students in a popular innovation class that has historically led to many startups. The researchers — Karan Girotra, Lennart Meincke, Christian Terwiesch, and Karl Ulrich — used human judges to assess idea quality, and found that ChatGPT-4 generated more, cheaper and better ideas than the students. Even more impressive, from a business perspective, was that the purchase intent from outside judges was higher for the AI-generated ideas as well! Of the 40 best ideas rated by the judges, 35 came from ChatGPT.

A second paper conducted a wide-ranging crowdsourcing contest, asking people to come up with business ideas based on reusing, recycling, or sharing products as part of the circular economy. The researchers (Léonard Boussioux, Jacqueline N. Lane, Miaomiao Zhang, Vladimir Jacimovic, and Karim R. Lakhani) then had judges rate those ideas, and compared them to the ones generated by GPT-4. The overall quality level of the AI and human-generated ideas were similar, but the AI was judged to be better on feasibility and impact, while the humans generated more novel ideas.

The final paper did something a bit different, focusing on creative writing ideas, rather than business ideas. The study by Anil R. Doshi and Oliver P. Hauser compared humans working alone to write short stories to humans who used AI to suggest 3-5 possible topics. Again, the AI proved helpful: humans with AI help created stories that were judged as significantly more novel and more interesting than those written by humans alone. There were, however, two interesting caveats. First, the most creative people were helped least by the AI, and AI ideas were generally judged to be more similar to each other than ideas generated by people. Though again, this was using AI purely for generating a small set of ideas, not for writing tasks.

Source: Automating creativity | Ethan Mollick

CAPTCHA is an arms race we're losing against AI bots

I saw a story that GitHub’s CAPTCHA had become ridiculously hard and multiple people weren’t able to solve it within the time limit. GitHub have presumably upgraded their system because the version we’ve come to know and despise (“click on all of the traffic lights”) is now solved faster by AI than by humans.

“Life is a campaign against malice” said the 17th century Jesuit priest and philosopher Baltasar Gracián. How right he was.

You definitely have tried to access some websites and have gotten bombarded with a series of puzzles requiring you to correctly identify traffic lights, buses, or crosswalks to prove that you’re indeed human before you log in.

Known as Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA), the technology is intended to protect a website from fraud and abuse without creating friction. The puzzles are meant to ensure that only valid users are able to access the site and not automated invasions.

Google replaced CAPTCHA with a more advanced tool called reCAPTCHA in 2019, but the team’s technical lead Aaron Malenfant told the Verge at the time that the technology would no longer be viable in 10 years’ time because advanced tech would allow the Turing test to run in the background.

His prediction was right. Artificial Intelligence (AI) bots are fast-evolving and are now beating the reCAPTCHA methodology used to confirm the validity and personhood of the users of various websites. They do this by imitating how the human brain and vision work. In fact, AI bots are measuring up to humans, and even beating them, in numerous facets.

Source: AI bots are better than humans at solving CAPTCHA puzzles | Quartz
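Malenfant’s “background” Turing test is essentially what reCAPTCHA v3 now does: instead of a puzzle, the site receives a risk score between 0.0 (likely a bot) and 1.0 (likely human) from Google’s siteverify endpoint, and it is up to the site to decide what to do with it. A minimal sketch of that server-side decision in Python (the 0.5 threshold and the `allow_request` name are my illustration, not Google’s recommendation):

```python
def allow_request(verification: dict, threshold: float = 0.5) -> bool:
    """Decide whether to let a request through, given the parsed JSON
    returned by reCAPTCHA v3's siteverify endpoint."""
    if not verification.get("success", False):
        return False  # token invalid, expired, or already used
    # v3 responses carry a risk score; treat a missing score as suspicious.
    return verification.get("score", 0.0) >= threshold

# Likely human: verified token with a high score
print(allow_request({"success": True, "score": 0.9, "action": "login"}))  # True
# Likely bot: valid token but a low score
print(allow_request({"success": True, "score": 0.1, "action": "login"}))  # False
```

The point is that the “test” never surfaces to the user at all, which is exactly why it keeps working after image puzzles fall to machine vision.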

When it's getting too hot for plants to photosynthesize, you know we've got a problem

I used to run a site called extinction.fyi, which documented the climate emergency. This is definitely an article I would have featured there.

As would the news that French nuclear power stations had to stop running when the rivers they use for cooling became too hot: discharging the even warmer water from the cooling circuits would have raised the temperature further, killing aquatic life.

Leaves in the world’s tropical forests are approaching critical temperatures at which photosynthesis breaks down—and a fraction have likely already passed that threshold—raising alarms about the fate of these essential ecosystems under the most pessimistic projections of human-driven climate change, reports a new study.

[…]

The ECOSTRESS data, along with follow-up measurements from the ground, showed that tropical canopy temperatures tend to peak at around 34°C, though some regions experienced temperatures that exceeded 40°C. Because there is a surprising amount of temperature variation between the individual leaves on a single tree, the researchers estimated that about a tenth of a percent of all leaves in tropical forests are annually pushed beyond the critical threshold of 46.7°C that marks the breaking point of photosynthesis.

[…]

As global temperatures continue to rise, more tropical leaves will be pushed beyond their photosynthetic capabilities, causing plants to perish. While the researchers emphasized that there is a lot of uncertainty in their models, they warned that an increase in global air temperatures of about 3.9°C could trigger a major photosynthetic meltdown for tropical forests. This estimated increase is within the range of climate models that project a future where human greenhouse gas emissions don’t begin to fall until after 2080.

Source: It’s Getting Too Hot for Tropical Trees to Photosynthesize, Scientists Warn | VICE
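A back-of-the-envelope sketch of how leaf-to-leaf variation produces that tail: if leaf temperatures are roughly normally distributed around the ~34°C canopy peak, a spread of about 4°C (my assumed figure, not one from the study) is already enough to push roughly a tenth of a percent of leaves past the 46.7°C threshold.

```python
import math

CRITICAL_C = 46.7  # photosynthesis breakdown threshold cited by the study

def fraction_above(mean_c: float, sd_c: float, threshold_c: float = CRITICAL_C) -> float:
    """Fraction of leaves hotter than the threshold, assuming leaf
    temperatures are normally distributed around the canopy mean."""
    z = (threshold_c - mean_c) / (sd_c * math.sqrt(2))
    return 0.5 * math.erfc(z)  # upper tail of the normal distribution

# Canopy peak of ~34 °C with an assumed ~4 °C leaf-to-leaf spread:
print(f"{fraction_above(34.0, 4.1):.4%}")  # roughly a tenth of a percent
# Warming the canopy by a few degrees shifts the whole tail:
print(f"{fraction_above(38.0, 4.1):.4%}")
```

The worrying part is visible in the second call: because it is a tail probability, nudging the mean up a few degrees multiplies the fraction of leaves past the threshold rather than adding to it.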

Structural insecurity

This fantastic piece by Astra Taylor, whose book The Age of Insecurity is on my to-read list, is sadly behind a paywall. I managed to bypass it, which is why I’m excerpting so much in this post.

What I like is the separation of inequality from insecurity, and the distinction between existential insecurity and ‘manufactured’ insecurity. Being published in The New York Times, the context is the American economy, which largely exists without a social safety net.

The situation is better in the UK/Europe, but we still live in a much more economically precarious world than our parents and grandparents did. And perhaps that’s why everyone’s anxious all of the time.

Since 2020, the richest 1 percent has captured nearly two-thirds of all new wealth globally — almost twice as much money as the rest of the world’s population. At the beginning of last year, it was estimated that 10 billionaire men possessed six times as much wealth as the poorest three billion people on Earth. In the United States, the richest 10 percent of households own more than 70 percent of the country’s assets.

Such statistics are appalling. They have also become familiar. Since it was catapulted onto the national stage more than a decade ago by Occupy Wall Street, “inequality” has been a frequent topic of conversation in American political life. It helped animate Bernie Sanders’s influential campaigns, reshaped academic scholarship, shifted public policy, and continues to galvanize protest. And yet, however important focusing on the inequality crisis has been, it has also proven insufficient.

If we want to understand contemporary economic life, we need a more expansive framework. We need to think about insecurity. Where inequality encourages us to look up and down, to note extremes of indigence and opulence, insecurity encourages us to look sideways and recognize potentially powerful commonalities.

If inequality can be captured in statistics, insecurity requires talking about feelings: It is, to borrow a phrase from feminism, personal as well as political. Economic issues, I’ve come to realize, are also emotional ones: the spike of shame when a bill collector calls, the adrenaline when the rent or mortgage is due, the foreboding when you think about retirement.

And unlike inequality, insecurity is more than a binary of haves and have-nots. Its universality reveals the degree to which unnecessary suffering is widespread — even among those who appear to be doing well. We are all, to varying degrees, overwhelmed and apprehensive, fearful of what the future might have in store. We are on guard, anxious, incomplete and exposed to risk. To cope, we scramble and strive, shoring ourselves up against potential threats. We work hard, shop hard, hustle, get credentialed, scrimp and save, invest, diet, self-medicate, meditate, exercise, exfoliate.

[…]

Rather than something to pathologize, I want us to see insecurity as an opportunity. We all need protection from life’s hazards, natural or human-made. The simple acceptance of our mutual vulnerability — of the fact that we all need and deserve care throughout our lives — has potentially transformative implications. When we spur people on with insecurity because we expect the worst from them, we create a vicious cycle that stokes desperation and division while facilitating the kind of cutthroat competition and consumption that has brought our fragile planet to a catastrophic brink. When we extend trust and support to others, we improve everyone’s security — including our own.

[…]

Insecurity, after all, is what makes us human, and it is also what allows us to connect and change. “Nothing in Nature ‘becomes itself’ without being vulnerable,” writes the physician Gabor Maté in “The Myth of Normal.” “The mightiest tree’s growth requires soft and supple shoots, just as the hardest-shelled crustacean must first molt and become soft.” There is no growth, he observes, without emotional vulnerability.

The same also applies to societies. Recognizing our shared existential insecurity, and understanding how it is currently used against us, can be a first step toward forging solidarity. Solidarity, in the end, is one of the most important forms of security we can possess — the security of confronting our shared predicament as humans on this planet in crisis, together.

Source: Why Does Everyone Feel So Insecure All the Time? | The New York Times

Emoji, we salute you 🫡

I remember going to a conference session about a decade ago when people were still on the fence about emoji and the presenter said that they were the most important form of visual communication since hieroglyphics.

It’s hard to argue otherwise. I’ve been a huge fan since I noticed that adding a smiley to my emails made a huge difference to the way that people received and understood them. It’s a way of communicating emotion at a distance; how would we navigate group chats and social networks without them? 😅

Valeria Pfeifer is a cognitive scientist at the University of Arizona. She is one of a small group of researchers who has studied how emojis affect our thinking. She tells me that my newfound joy makes sense. Emojis “convey this additional complex layer of meaning that words just don’t really seem to get at,” she says. Many a word nerd has fretted that emojis are making us—and our communication—dumber. But Pfeifer and other cognitive scientists and linguists are beginning to explain what makes them special.

In a book called The Emoji Code, British cognitive linguist Vyvyan Evans describes emojis as “incontrovertibly the world’s first truly universal communication.” That might seem like a tall claim for an ever-expanding set of symbols whose meanings can be fickle. But language evolves, and these ideograms have become the lingua franca of digital communication.

[…]

Perhaps the first study of how these visual representations activate the brain was presented at a conference in 2006. Computer scientist Masahide Yuasa, then at Tokyo Denki University in Japan, and his colleagues wanted to see whether our noggins interpret abstract symbolic representations of faces—emoticons made of punctuation marks—in the same way as photographic images of them. They popped several college students into a brain scanning machine (they used functional magnetic resonance imaging, or fMRI) and showed them realistic images of happy and sad faces, as well as scrambled versions of these pictures. They also showed them happy and sad emoticons, along with short random collections of punctuation.

The photos lit up a brain region associated with faces. The emoticons didn’t. But they did activate a different area thought to be involved in deciding whether something is emotionally negative or positive. The group’s later work, published in 2011, extended this finding, reporting that emoticons at the end of a sentence made verbal and nonverbal areas of the brain respond more enthusiastically to written text. “Just as prosody enriches vocal expressions,” the researchers wrote in their earlier paper, the emoticons seemed to be layering on more meaning and impact. The effect is like a shot of meaning-making caffeine—pure emotional charge.

Source: Your 🧠 On Emoji | Nautilus

Hacking the vagus nerve

It looks like electric stimulation of the vagus nerve using something like a TENS machine could help with everything from obesity and depression to Long Covid.

One of the universities local to me is leading some of this work, and they have a page about it here.

From plunging your face into icy water, to piercing the small flap of cartilage in front of your ear, the internet is awash with tips for hacking this system that carries signals between the brain and chest and abdominal organs.

[…]

Meanwhile, scientific interest in vagus nerve stimulation is exploding, with studies investigating it as a potential treatment for everything from obesity to depression, arthritis and Covid-related fatigue. So, what exactly is the vagus nerve, and is all this hype warranted?

The vagus nerve is, in fact, a pair of nerves that serve as a two-way communication channel between the brain and the heart, lungs and abdominal organs, plus structures such as the oesophagus and voice box, helping to control involuntary processes, including breathing, heart rate, digestion and immune responses. They are also an important part of the parasympathetic nervous system, which governs the “rest and digest” processes, and relaxes the body after periods of stress or danger that activate our sympathetic “fight or flight” responses.

[…]

Search “vagus nerve hacks” on TikTok, and you’ll be bombarded with tips ranging from humming in a low voice to twisting your neck and rolling your eyes, to practising yoga or meditation exercises.

Researchers who study the vagus nerve are broadly sceptical of such claims. Though such techniques may help you to feel calmer and happier by activating the autonomic nervous system, the vagus nerve is only one component of that. “If your heart rate slows, then your vagus nerve is being stimulated,” says Tracey. “However, the nerve fibres that slow your heart rate may not be the same fibres that control your inflammation. It may also depend on whether your vagus nerves are healthy.”

Similarly, immersing your face in cold water may also slow down your heart rate by triggering something called the mammalian dive reflex, which also triggers breath-holding and diverts blood from the limbs to the core. This may serve to protect us from drowning by conserving oxygen, but it involves sympathetic and parasympathetic responses.

Electrical stimulation may hold greater promise though. One thing that makes the vagus nerves so attractive is surgical accessibility in the neck. “It is quite easy to implant some device that will try to stimulate them,” says Dr Benjamin Metcalfe at the University of Bath, who is studying how the body responds to electrical vagus nerve stimulation. “The other reason they’re attractive is because they connect to so many different organ systems. There is a growing body of evidence to suggest that vagus nerve stimulation will treat a wide range of diseases and disorders – everything from rheumatoid arthritis through to depression and alcoholism.”

Source: The key to depression, obesity, alcoholism – and more? Why the vagus nerve is so exciting to scientists | The Guardian

Reality and the templated life

This article reviews a book entitled A Web of Our Own Making by Antón Barba-Kay, which reminded me a lot of an issue of Audrey Watters’ Second Breakfast newsletter about the templated body.

What does it mean for there to be multiple, constructed realities? When everyone has a smartwatch and is tracking everything, does that make their life both qualitatively and quantitatively different?

Some of these observations, though apt, aren’t exactly new — that the possibility of tracking our steps for so-called health reasons distorts our relationship with a simple country walk, that the fundamentally data-driven nature of smartphone culture “is such as to translate larger human questions about how to live into technical puzzles that may be ‘problem-solved,’” that Twitter timelines and Instagram feeds have become a saccharine way of capturing our limited and precious attention by distracting us from the less immediately rewarding elements of being human.

But the fusillade intensity with which Barba-Kay produces these inconvenient truths renders them impossible to ignore; from the details we start to perceive, little by little, the devil. As Barba-Kay writes, “digital technology is training us not simply to a new sense of what is real and really good, but to a new understanding of the contrasts within which we see that reality.” In other words, our awareness of what the virtual world cannot do has made us hungrier for those elements of reality from which we have not yet become alienated.

If reality is changing, it is because, for better and for worse, our lives are increasingly determined by one specific vision of human ingenuity: a vision that valorizes those elements of human life we freely choose (or think we do) over those we once saw as given to us — our bodies, our families, our communities. Digital culture functions today as the Enlightenment cosmopolis once did: as a fantasy in which society reshapes itself along the lines of affinity.

[…]

“Where once it was occasionally possible to opt out of ‘reality’ (by taking drugs, say),” Barba-Kay writes in the book’s perhaps most chilling line, “it is now increasingly necessary to think about how to opt in to it.” And we need to. It may be the most important decision we make in our lives.

Source: How the Internet obeys you | The New Atlantis

Temporarily Abled

This blog post reflects on Cindy Li’s pithy quotation that “we’re all just temporarily abled”. I’m recovering from a rib injury sustained on holiday, so I feel the author’s pain. Hopefully it won’t take me months to recover, but it’s impacting my exercise regime and mental outlook.

It reminded me of a post on the Microsoft Design blog called Kill Your Personas which dives into temporary disabilities. Definitely worth a read.

June 6th I was on vacation at the beach with my family and tried something that, looking back now, maybe I’m too old for. And I injured my knee.

[…]

That was almost three months ago now. I’m still limping. It’s getting better but it’s slow. The doctor told me, “Just be aware: this isn’t days or weeks recovery. This is months.”

Since then, I’ve tried to make the best of summer while kids are out of school but my mobility has been limited.

Through all of it, I’ve found myself noticing “accessibility” helpers more than ever before: that railing on the stairs, that ramp off to the side of the building, that elevator tucked away in the back.

All things I rarely noticed before but have since become vital.

And that phrase plays on repeat in my head — “we’re all just temporarily abled”.

[…]

I suppose it’s easy to misunderstand ability as a binary thing. But now I’m understanding more how fluid it is, as it inevitably comes in and out of each of our lives — “100% of people” in their lifetimes.

In classic human fashion, it’s one of those things you take for granted until it’s gone.

Source: “We’re All Just Temporarily Abled” | Jim Nielsen’s Blog

The only way to outlaw encryption is to outlaw encryption

An enjoyable take by The Register on the UK’s Online Safety Bill. I was particularly interested by the link to Veilid, a new secure peer-to-peer network for apps which is like the offspring of IPFS and Tor.

Many others have made the point about how much government ministers like the end-to-end encryption of their own WhatsApp communications. But they’d also like to break into, well… everyone else’s.

The official madness over data security is particularly bad in the UK. The British state is a world class incompetent at protecting its own data. In the past couple of weeks alone, we have seen the hacking of the Electoral Commission, the state body in charge of elections, the mass exposure of birth, marriage and death data, and the bulk release of confidential personnel information of a number of police forces, most notably the Police Service Northern Ireland. This was immediately picked up by terrorists who like killing police. It doesn't get worse than that.

This same state is, of course, the one demanding that to “protect children,” it should get access to whatever encrypted citizen communication it likes via the Online Safety Bill, which is now rumored to be going through British Parliament in October. This is akin to giving an alcoholic uncle the keys to every booze shop in town to “protect children”: you will find Uncle in a drunken coma with the doors wide open and the stock disappearing by the vanload.

[…]

It is just stupidity stacked on incompetence balanced on political Dunning Krugerism, and the advent of Veilid drowns the lot in a tidal wave of foetid futility. What can a government do about a framework? What can it do about open source?

[…]

The only way to outlaw encryption is to outlaw encryption. Anything less will fail, as it is always possible in software to create kits of parts, all legal by themselves, that can be linked together to provide encryption with no single entity to legislate against. Our industry is fully aware of this. Criminals know it too. Ordinary people will learn it as well, if they have to. This information is free to everyone – except the politicians, it seems. For them, reality is far too expensive.

Source: Last rites for UK’s ridiculous Online Safety Bill | The Register

On 'Executive Function Theft'

This post by Abigail Goben popped up in several places and is one of those that gives a name to something most people will recognise. It’s an important refinement of what is often called ‘care work’, as it highlights how something important is taken when repetitive, administrative work is outsourced to other humans.

Executive Function Theft (EFT) is the deliberate abdication of decision-making, tasks, and responsibilities that are perceived as administrative or repetitive, of lesser importance, or aren’t pleasant or shiny, to another person, with the result that the receiving person’s executive function becomes so exhausted that they are unable to participate in, contribute to, or enjoy higher level efforts.

[…]

In the workplace, an example of EFT often plays out in the inequality of service labor, and I will specifically use academic service work here as it is my current workplace. Think of the people who end up with more than their share of administrative maintenance tasks — such as organizing get well cards, scheduling workshops, or taking notes. Consider the colleague who has a list of committee appointments a yard long and has just gotten a request to be on Another! Important! (is it?) Committee. These individuals may not be doing these tasks strictly because it is their job responsibility, but because they see a need to be filled or have been asked or tasked with taking on more service that they feel they cannot turn down. And notice how those tasks so often fall to the same group of people — especially when we get to any form of implementation or ongoing commitment rather than the “fun” ideation phase. One way to calculate these service loads would be to count the number of committees and task forces held by and expected of various individuals — who gets a pass and who gets penalized if they don’t say yes.

Quite often there’s a gendered component as to who is tasked with these additional service responsibilities — the office housekeeping as well as the care tasks of the workplace.

[…]

I will admit to never having been able to read Cal Newport’s Deep Work all the way through — I got too irritated — but I would point to his dismissive naming of the idea of “shallow work”, which he defines as logistical and often repetitive tasks, such as writing short emails. Newport recommends entirely stopping or poorly performing that work; I read this as encouraging readers to commit EFT against others around them. Too often the dump off of what are critical responsibilities is not to a specifically tasked and appreciated administrator but instead onto the junior, female, minoritized, non-tenure track, or precarious employees. It’s the maintenance work of keeping the workplace going and we do not appreciate the maintainers. Similarly thinking about EFT in the workplace, I was reminded of the guy who got famous with the Four Hour Workweek book and how we were all just supposed to outsource things to nameless underpaid gig workers. Notably, when looking for a summary of that book, I found an article by Cal Newport praising it.

Source: Executive Function Theft | Hedgehog Librarian

Image: Uday Mittal

Why anxious people find it difficult to control their emotions

This explains a lot. Basically, studies have found that a specific part of the brain behaves differently in anxious individuals, and this difference might explain why they struggle with emotional control. It’s like a traffic jam in the brain that makes it harder for the signals to get through, leading to difficulties in managing emotional reactions.

Anxious individuals consistently fail in controlling emotional behavior, leading to excessive avoidance, a trait that prevents learning through exposure. Although the origin of this failure is unclear, one candidate system involves control of emotional actions, coordinated through lateral frontopolar cortex (FPl) via amygdala and sensorimotor connections. Using structural, functional, and neurochemical evidence, we show how FPl-based emotional action control fails in highly-anxious individuals. Their FPl is overexcitable, as indexed by GABA/glutamate ratio at rest, and receives stronger amygdalofugal projections than non-anxious male participants. Yet, high-anxious individuals fail to recruit FPl during emotional action control, relying instead on dorsolateral and medial prefrontal areas. This functional anatomical shift is proportional to FPl excitability and amygdalofugal projections strength. The findings characterize circuit-level vulnerabilities in anxious individuals, showing that even mild emotional challenges can saturate FPl neural range, leading to a neural bottleneck in the control of emotional action tendencies.

Source: Anxious individuals shift emotion control from lateral frontal pole to dorsolateral prefrontal cortex | Nature Communications

Jobs, AI, and human worth

I’m sharing this article to make a comment about the framing for these kinds of things. The article is an extract from a book by David Runciman, and implicitly links human worth to jobs.

Part of the existential dread of AI replacing humans is that, if your job is your life, then who are you without the doing? Instead of hand-wringing about robots and machines, perhaps our time is better spent figuring out who we are and how we want to flourish.

In the slew of reports published in the 2010s looking to identify which jobs were most at risk of being automated out of existence, sports officials usually ranked very high up the list (the best known of these studies, by Carl Benedikt Frey and Michael Osborne in 2017, put a 98% probability on sports officiating being phased out by computers within 20 years). Here, after all, is a human enterprise where the single most important qualification is an ability to get the answer right. In or out? Ball or strike? Fair or foul? These are decisions that need to be underpinned by accurate intelligence. The technology does not even have to be state-of-the-art to produce far better answers than humans can. Hawk-Eye systems have been outperforming human eyesight for nearly 20 years. They were first officially adopted for tennis line calls in 2006, to check cricket umpire decisions in 2009 and, more recently, to rule on football offsides.

[…]

Efficiency – even accuracy – turns out not to be the main requirement of the organisations that employ people to give decisions during sports games. They are also highly sensitive to appearance, which includes a wish to keep their sport looking and feeling like it’s still a human-centred enterprise. Smart technology can do many things, but in the absence of convincingly humanoid robots, it can’t really do that. So actual people are required to stand between the machines and those on the receiving end of their judgments. The result is more work all round.

[…]

How things look isn’t everything. There are significant parts of every organisation where appearance doesn’t matter so much, in the backrooms and maybe even the boardrooms that the public never gets to see. Behind-the-scenes technical knowledge that underpins the performance of public-facing tasks is likely to be an increasingly precarious basis for reliable employment. This is true of many professions, including accountancy, consultancy and the law. There will still be lots of work for the people who deal with people. But the business of gathering data, processing information and searching for precedents can now more reliably be done by machines. The people who used to undertake this work, especially those in entry-level jobs such as clerks, administrative assistants and paralegals, might not be OK.

[…]

History offers a partial guide to what might happen. Worries about automation displacing human workers are as old as the idea of the job itself. The Industrial Revolution disrupted many kinds of labour – especially on the land – and undid entire ways of life. The transition was grim for those who had to switch from one mode of subsistence existence to another. Yet the end result was many more jobs, not fewer. Factories brought in machines to do faster and more reliably what humans used to do or could never do at all; at the same time, factories were where the new jobs appeared, involving the performance of tasks that were never required before the coming of the machines. This pattern has repeated itself time and again: new technology displaces familiar forms of work, causing massively painful disruption. It is little consolation to the people who lose their jobs to be told that soon enough there will be entirely new ways of earning a living. But there will.

Source: The end of work: which jobs will survive the AI revolution? | The Guardian

Did people in the past look older for their age?

I’m 42 but look much younger than my father did at his age. And I’m sure that he looked younger than my grandfather did at his age. This is an interesting article about why.

There’s a meme that makes the rounds every so often. It’s a group shot of the cast of 80s sitcom Cheers, with the ages of each actor displayed on the image. Every time it comes back around, people express surprise and disbelief that this group of what looks to be middle-aged folk are actually in their twenties and thirties. With his greying moustache and receding hairline, John Ratzenberger looks far older than what we might now imagine a 30-something man to look like – current 35-year-old actors Michael Cera and Nicholas Braun, for example, look significantly younger in comparison.

[…]

While factors like diet, skincare and aesthetic procedures can make us physically look younger – hairstyles, make-up and fashion also play a role in how youthful we appear. While someone with 2023-esque micro bangs may scream young to us, we associate photos of 80s hairstyles and big shoulder pads with being older, even if the person in the image is the same age as us. This is partly because of how we consider trendy hairstyles and fashion of that time to be outdated.

[…]

Today we might be obsessed with preserving our youth, but this wasn’t always the case. In past decades, popular trends often existed to make young people appear more sophisticated and bold. “The dramatic nature of [80s] hairstyles often conveyed a sense of confidence and authority, which could be associated with older individuals,” hairdresser Gwenda Harmon says. “Certain hairstyles of the 1980s actually made some youth appear older due to their bold and sophisticated nature.”

Source: Why did people in the past look so much older? | Dazed

Ask culture vs guess culture

I’ve seen this culture clash outlined before, although I wouldn’t necessarily use the labels ‘ask’ and ‘guess’ for the different approaches. I was raised by a mother who very much (still!) relies on inference to live her life. I’ve found being much more direct useful in living my own.

(I’d also note that the author seems to be playing fast-and-loose with the term ‘Western’ to mean ‘American’ here as British people are much more likely to be guessers than askers in my experience.)

Ask culture and guess culture are vastly different in behavior and expectations. Here are some highlights:

Ask culture expectations

  • Ask for what you want, even if it seems out of reach or like a big unreasonable request
  • Take care of your own needs, and others will take care of theirs
  • It’s fine to make requests that people will probably say no to
  • People say yes to requests that they truly feel good about, say no to ones they don’t

Guess culture expectations

  • Only ask for something if you’re already pretty sure the other person will say yes
  • Read an abundance of indirect contextual cues to determine if your request is reasonable to make
  • It’s rude to put someone in a position where they have to say no to you
  • If the appropriate feelers and context are set, you will never have to make your request at all.

[…]

If you’re more a guess-culture person, asking people for help without knowing their circumstances can feel rude or intrusive. Broadcasting publicly your need for help can feel awkward and vulnerable.

[…]

Western society is very much ask culture. A classic example can be found in proverbs. “A squeaky wheel gets the grease” is an American proverb, enforcing the ideas of individualism and that asking for what you want will benefit you.

Source: Ask vs guess culture | Tech and Tea

Life in 2050

Futurist Stowe Boyd imagines life in 2050, through three scenarios. I can’t help but think that ‘Collapseland’ (excerpted below) is the most likely outcome. Sadly.

Collapseland is where everything goes pear shaped. Dithering by governments and corporations has allowed climate change to push the world into increased heat, drought, and violent weather. The Human Spring of the 2020s led to a conservative backlash and a suppression of the movement itself. It also led to a suppression of advancements in AI, since it became associated with the science orientation of the movement.

But governments and corporations get their act together in the late 2020s and 2030s to avert an extinction event via the global adoption of solar. However, this only comes after a serious ecological catastrophe has occurred. Inequality remains unchecked, and the poor become much poorer.

Collapseland businesses are much like businesses of 2015. Most efforts are directed toward basic requirements — like desalinating water, relocating people away from low-lying or drought stricken areas, and struggling with food production challenges. As a result, little innovation has taken place. It’s no different from the company you work for today, except longer hours, fewer co-workers, less pay, and much more dust. To increase profits, corporations have cut staff and forced existing workers to work harder.

Source: What Will a Corporation Look Like in 2050? | WIRED

Context is everything, especially with books

When I was younger I slogged through some terrible books that, because they were deemed ‘classics’, I thought I should read. Thankfully, I’m a lot more ruthless with non-fiction and, in fact, these days I’m happy to give up on a book I’m not finding enjoyable/relevant after 50 pages.

The interesting thing, though, is that it’s always worth coming back to books. Sometimes, a change in interest, age, or context can completely change your relationship with them.

I used to believe that every book has an objective value. And I used to believe that this value is fixed and universal.

Now, I believe it’s much more useful to say something in this form: this book has this value to this person in this context.

[…]

The idea that a book’s value is best judged alongside the notional reader and their current context has some corollaries:

First, reading the books that your heroes cite as important will not necessarily be rewarding. If you admire Bret Victor for his work on computing interfaces, only some of his library will be high value to you because his library also includes lots of books that have nothing to do with UI.

Second, yes, it’s likely that “great books” may be high value in some more universal sense that is independent of reader and context. And, yes, this high value may come from something inherent in the quality of the books, rather than from the fact that they are about themes that are more relevant to more people. Yes, I probably wouldn’t dispute this. But I suspect that relevance to person and context is a better guide to what to read.

Third, book recommendation systems based on your reading history can be helpful, but only so much. You, now, are not represented by your reading history. You’ve changed. Making recommendations based on books you read twenty years ago might produce good books for you, now. But probably not.

Source: Is this a good book for me, now? | Mary Rose Cook

Image: Thought Catalog

Using AI to aid with banning books is another level of dystopia

I’m very much optimistic about the uses of AI tools such as LLMs to help with specific tasks. See the latest post on my personal blog, for example.

However, what I’m concerned about is AI decision-making. In this case, a crazy law is being implemented by people who haven’t read the books in question and who outsource the decision to a language model that doesn’t really understand what’s being asked of it.

According to an August 11 article in the Iowa state newspaper The Gazette, spotted by PEN America, the Mason City Community School District recently removed 19 books from its collection ahead of its quickly approaching 2023-24 academic year. The ban attempts to comply with a new law requiring Iowa school library catalogs to be both “age appropriate” and devoid of “descriptions or visual depictions of a sex act.” Speaking with The Gazette last week, Mason City’s Assistant Superintendent of Curriculum and Instruction Bridgette Exman argued it was “simply not feasible to read every book and filter for these new requirements.”

“Frankly, we have more important things to do than spend a lot of time trying to figure out how to protect kids from books,” Exman tells PopSci via email. “At the same time, we do have a legal and ethical obligation to comply with the law. Our goal here really is a defensible process.”

According to The Gazette, the resulting strategy involved compiling a master list of commonly challenged books, then utilizing a previously unnamed “AI software” to supposedly provide textual analysis for each title. Flagged books were then removed from Mason City’s 7-12th grade school library collections and “stored in the Administrative Center” as educators “await further guidance or clarity.” Titles included Alice Walker’s The Color Purple, Margaret Atwood’s The Handmaid’s Tale, Toni Morrison’s Beloved, and Buzz Bissinger’s Friday Night Lights.

Source: School district uses ChatGPT to help remove library books | Popular Science

Income Level 4

I’ve had reason to reflect on how easy my life is recently. Not only am I a straight, middle-aged, able-bodied white guy but, according to Gapminder, I’m living life in a prosperous country on ‘Level 4’.

Worth pondering.

People at Income Level 4 earn more than $32 a day. At this Income Level, we find the richest billion on the planet, who work in jobs that typically require at least 12 years of education — something those on the lower Income Levels cannot currently aspire to.

People at this Income Level are able to buy consumer goods, fly abroad with their families on holiday, and eat out at restaurants. None of these luxuries are available to people living at Levels 1 – 3, but are considered normal by most at Level 4.

The food they eat is often highly nutritious and diverse, as well as being rich in protein and vitamins. People at Level 4 can even buy pre-prepared food to save them time on cooking.

At this Income Level, not only electricity but also Internet connections are extremely reliable. Nearly every home has at least one TV and computer, and kitchens are equipped with stoves, ovens, toasters, and microwaves. Homes also have baths and showers installed with both hot and cold water — another luxury that is extremely rare at any other Income Level.

Instead of bikes and mopeds, people living at Level 4 usually own a car — sometimes even two per family. Public transport is also organized and readily available to everybody.

Perhaps most importantly, life at Level 4 is more secure than it is for people at levels 1 – 3. Not only are doors and windows locked securely, valuable property is usually insured against damage or theft. People also have bank accounts, access to credit, and pension funds for when they retire. Healthcare is also readily available to people living at Level 4, with basic medication available at affordable rates from local shops, and advanced and emergency medical treatment available locally to almost everybody.

Source: Income Level 4 | Gapminder

AI sports recruitment

A few weeks ago, I watched part of the EA Sports FC 24 announcement video with my son. The CEO of Electronic Arts mentioned something that anyone who’s been paying attention already knows: games like FIFA (of which EA Sports FC is the spiritual successor) have transformed football.

There’s a symbiotic link between how people play football and how people play football video games. What’s less easy to spot is how talent is identified, nurtured, and shaped. That’s where articles like this one, about AI in the behind-the-scenes processes, come in.

As someone with two very sporty kids, and one of whom is potentially on a pathway to professional football, this is fascinating to me.

There's no doubt that professional sports have been primed for the potential impact of artificial intelligence. Innovations have the potential to transform the way we consume and analyze games from both an administrative and fan standpoint. For soccer specifically, there are opportunities for live game analytics, match outcome modeling, ball tracking, player recruitment, and even injury predicting — the opportunities are seemingly endless.

[…]

Luis Cortell, senior recruiting coach for men’s soccer for NCSA College Recruiting, is a little less bullish, but still believes AI can be an asset. “Right now, soccer involves more of a feel for the player, and an understanding of the game, and there aren’t any success metrics for college performance," he said. “While AI won’t fully fill that gap, there is an opportunity to help provide additional context.”

At the same time, people in the industry should be wary of idealizing AI as a godsend. “People expect AI to be amazing, to not make errors or if it makes errors, it makes errors rarely,” Shapiro said. The fact is, predictive models will always make mistakes but both researchers and investors alike want to make sure that AI innovations in the space can make “fewer errors and less expensive errors” than the ones made by human beings.

[…]

The MLS said in a statement that ai.io’s technology “eliminates barriers like cost, geography and time commitment that traditionally limit the accessibility of talent discovery programs.” Felton-Thomas said it is more important to understand that ai.io will “democratize” the recruiting process for the MLS, ensuring physical skills are the most important metric when leagues and clubs are deciding where to invest their money. “What we’re looking to do is give the clubs a higher confidence level when they’re making these decisions on who to sign and who to watch.” By implementing the AI-powered app, recruitment timelines are also expected to be cut.

Source: Will AI revolutionize professional soccer recruitment? | Engadget

Secret family recipes (on the side of containers)

I love this 😂

In response to our call, 174 readers wrote in with stories of plagiarized family recipes. Hailing from New York to Nicaragua, from Auckland, New Zealand, to Baghpat, India, they prove that this is a global phenomenon. The majority of readers described devastating discoveries: They found supposedly secret recipes in the pages of famous cookbooks, and heard confessions from parents whose legendary dessert recipes came from the side of Karo Syrup bottles.

[…]

Several readers joked about family members threatening to take a secret recipe to the grave. To our surprise, we also received a story of a late-in-life confession:

My uncle was known around town as the “fudge man.” Every year, he would make pounds of it for Christmas parties, bake sales, and gifts. It was legendary—people would beg him for the recipe. When he was ill in the hospital, before he passed, his wife begged him for the recipe so she could keep his memory going. He replied, “It’s on the side of the marshmallow fluff container.”

–Jess Heller, Minnesota

Source: The Dirty Secret of ‘Secret Family Recipes’ | Gastro Obscura

Quake II remaster brings online LAN gaming

I can’t wait to play this. While I enjoy playing Doom Eternal by myself occasionally, LAN gaming with Quake II takes me back to being a teenager!


In a surprise announcement at QuakeCon, publisher Bethesda Softworks announced the immediate availability of a light remaster of the classic first-person shooter Quake II, similar to the one for the first Quake that was released not that long ago.

[…]

You get a lot of content for 10 bucks; the package includes the game’s original campaign, both previously released expansions, Quake II 64, and a new campaign called Call of the Machine with 28 levels developed by Machine Games (the team behind the recent Wolfenstein games).

There’s also split-screen local multiplayer (up to four players), as well as LAN and online multiplayer.

Source: Quake II gets a remaster for PC and consoles—and it’s exactly what it needs to be | Ars Technica

Introducing Homo naledi

Science is awesome. I love the way that we continue to rediscover and reinterpret what it means to be human based on archaeology and scientific theories.

Using an unparalleled range of tests, experts are investigating whether a group of ‘ape-men’ succeeded in creating a complex human-like culture - potentially thousands of years before our own species, Homo sapiens, managed to do so.

Adding to the mystery is the fact that the now long-extinct species behaved in several key ways like modern humans - and yet appears to have been able to do that with brains which were only a third the size of ours.

The evidence assembled so far is beginning to suggest that these small-brained ‘ape-men’ may have been able to do seven remarkable things:

  • Envisage an afterlife (in other words, a belief that some form of existence continues beyond death).
  • Believe that an afterlife occurs in some sort of ‘underworld’, located beneath (rather than on or above) the world of the living. That implies that they may have developed some very embryonic sense of cosmology.
  • Conceive the idea of physically burying their dead - in that ‘underworld’.
  • Give grave goods to dead members of their community - an apparent act that implies that they may have believed that the dead would somehow be able to use them in an afterlife.
  • Carry out potential rituals - specifically funerary meals - inside their ‘underworld’.
  • Create rudimentary art (abstract designs) around the entrance to at least one of the burial chambers in that ‘underworld’.
  • Plan some sort of relatively complex lighting system (either a succession of small fires and/or torches) to enable them to penetrate their ‘underworld’ and take their dead there.

[…]

“We know that what we’re discovering breaks totally new ground - and is therefore likely to be controversial. That’s why we are deploying every possible type of investigative technology to ensure that the maximum amount of additional evidence can be found,” said the leader of the Rising Star Cave investigation, National Geographic and University of Witwatersrand palaeoanthropologist, Professor Lee Berger, who with co-investigator, human evolution expert Professor John Hawks, has just published a detailed National Geographic book on the discoveries, entitled Cave of Bones.

Source: Scientific discovery casts doubt on our understanding of human evolution | The Independent

Landmark ruling in climate trial

I’ve only been there once, but Montana is an absolutely beautiful place. And much like other places that people call home, those that live there want to keep it that way.

It’s really heartening to see youth-led action be successful in a court of law. I hope that this leads to more cases being brought around the world.

A Montana state court today sided with young people who sued the state for promoting the fossil fuel industry through its energy policy, which they alleged prohibits Montana from weighing greenhouse gas emissions in approving the development of new factories and power plants. This prohibition, 16 plaintiffs ages 5 to 22 successfully argued, violates their constitutional right to a "clean and healthful environment in Montana for present and future generations."

Experts previously predicted that a win for youths in Montana would set an important legal precedent for how courts can hold states accountable for climate inaction. The same legal organization representing Montana’s young plaintiffs, Our Children’s Trust, is currently pursuing similar cases in four other states, The Washington Post reported.

[…]

Montana tried to argue that adjusting its energy policy and other statutes would have “no meaningful impact or appreciable effect,” the Post reported, because climate change is a global issue. Montana Assistant Attorney General Michael Russell described the testimony as a “week-long airing of political grievances that properly belong in the Legislature, not a court of law,” according to the Post. Notably, the state did not meaningfully attempt to dispute climate science.

[…]

Experts told Scientific American that Montana’s emissions are significant given its population size, emitting in 2019 “about 32 million tons of carbon dioxide.” That’s “about as much as Ireland, which has a population six times larger,” Scientific American reported. Young people suing alleged that Montana had “never denied a permit for a fossil fuel project,” the Post reported.

Source: Montana loses fight against youth climate activists in landmark ruling | Ars Technica

The tyranny of efficiency

Coupled with this (cited) Slate article about what people did with their free time 20 years ago, it seems like Gen Z has gone beyond ‘touching grass’ to rediscover… errands?

I’m being facetious, but there’s some good points here about the tyranny of being able to do everything from your phone and the comfort of your bed/sofa. We live quite close to the middle of our small town, and there’s nothing I enjoy more than strolling into town to pick something up.

Also, the book mentioned, Four Thousand Weeks, is well worth reading. It’s excellent as a way to reflect on your philosophy of work/life.

Auto-generated description: A man in a white t-shirt and beige pants sits on a front-load washing machine.

If we associate leaving the house only with seeing friends, seeking pleasure, or simply getting out for the love of god, it follows that going out is something we only do voluntarily, rather than for the general business of staying on top of things. Under this purview, “having weekend plans” means having fun plans. While staying home—whether to work, take care of business, or relax—is “not plans.” And that distinction feels correct. Sure, we could argue that sitting on our computers or puttering around our houses should count as plans, but neither connotes the life-giving kineticism of executing a plan outside, with friends, or in public, and I don’t think we should pretend it does. Regardless, this is a distinctly modern mode of operation.

[…]

In ‘Four Thousand Weeks’, a philosophical book about time management by Oliver Burkeman, he explains something he calls “the efficiency trap,” whereby the more we do, the more there is to do. This is of course counter to the mythology of productivity, which tells us that the sooner we can get things done, the sooner we’ll be able to relax and enjoy ourselves. Instead, Burkeman argues that “what needs doing” simply “expands to fill the time available.” Become efficient at work and you’ll be given more work. Answer all your emails and you’ll get all the replies and more. Finally reach your goals and you’ll think of new ones. There’s not actually an end in sight, and so by placing our faith in ever-decreasing segments of time spent on individual to-dos, we simply create the opportunity to complete more errands in less time, and in a more boring way. This reality presents us with the following paradox: Because it now takes less time to do things, we have way too much to do.


Source: #153: Rethinking “weekend plans” | Haley Nahman

Image: No Revisions

Calendars as data layers

I run my life by Google Calendar, so I found this post about different data layers including both past and future data points really interesting.

As someone who pays attention to their stress level as reported by a Garmin smartwatch, and who suffers from migraines, I’d find this kind of data juxtaposition super-interesting.

Our digital calendars turned out to be just marginally better than their pen and paper predecessors. And since their release, neither Outlook nor Google Calendar have really changed in any meaningful way.

[…]

Flights, for example, should be native calendar objects with their own unique attributes to highlight key moments such as boarding times or possible delays.

This gets us to an interesting question: If our calendars were able to support other types of calendar activities, what else could we map onto them?

[…]

Something I never really noticed before is that we only use our calendars to look forward in time, never to reflect on things that happened in the past. That feels like a missed opportunity.

[…]

My biggest gripe with almost all quantified self tools is that they are input-only devices. They are able to collect data, but unable to return any meaningful output. My Garmin watch can tell my current level of stress based on my heart-rate variability, but not what has caused that stress or how I can prevent it in the future. It lacks context.

Once I view the data alongside other events, however, things start to make more sense. Adding workouts or meditation sessions, for example, would give me even more context to understand (and manage) stress.

[…]

Once you start to see the calendar as a time machine that covers more than just future plans, you’ll realize that almost any activity could live in your calendar. As long as it has a time dimension, it can be visualized as a native calendar layer.

Source: Multi-layered calendars | julian.digital

Your personal time management strategy sucks

Too many pointless TLAs (Three Letter Acronyms) in this blog post, but it’s redeemed by having a core message that human beings are not cogs in a machine and have a finite time to accomplish their goals.

Although there have been plenty of people I’ve come across in my career who are always “super busy”, there’s one person in my orbit in particular at the moment who seems to carry the world on their shoulders. As this post points out, this is due to an inability to focus on what’s important.

(The diagram below exudes peak 1990s management consultancy vibes, so I’m only including it for comedy value.) 

People inform me they are busy as if it is a badge of honor. For me, it is a signal that they have a weak personal Playing to Win strategy.

[…]

[T]o have an effective personal strategy, you need to be deliberative about choosing where to deploy your limited available hours in tasks that your particular set of capabilities enable you to generate a win by creating disproportionate value for your organization. And, since this doesn’t happen automatically, you need a personal management system for doing it on an ongoing basis — because on this front, eternal vigilance is the price of effectiveness.

[…]

Remember that strategy is what you do not what you say. So, even if you don’t think of yourself as having a personal Playing to Win strategy, step back and reverse engineer what it actually is based on what you actually do.

Source: Being ‘Too Busy’ Means Your Personal Strategy Sucks | Roger Martin

Giving advice online without mansplaining

In the last few days I rediscovered this post from Another Angry Woman via someone linking to it. I don’t think I shared it at the time, but it helped me understand how even well-meaning advice can be spectacularly unhelpful.

I’d recommend reading the whole thing, especially if you identify as male. However, the main takeaway for me was to ask if the person wants advice. Most recently, for example, I enquired if someone was “just venting or would like advice based on my experience”. They replied they were just venting.

Clippy
Remember Clippy from Microsoft Office? You’re just trying to write a letter, and this insufferable little paperclip is popping up constantly with his vapid googly eyes and awful eyebrows and that fucking condescending smirk and his horrid little bendy body and oh god the colour of that speech bubble, like slightly worrying vaginal discharge, and the “it looks like” why is it so passive aggressive why- Sorry, I lost myself there. In short, Clippy was an irritation, and you’re giving someone’s notifications tab the vibes of using Word in 1997, which nobody wants to go back to.

[…]

There is a gendered element to this, too. Mansplaining is something which most women on the internet have experienced fairly frequently. It is exhausting. It is patronising. It is the background hum of patriarchy.

You might not personally be mansplaining. Maybe you’re not even a man. But those who have been on the receiving end of mansplaining are sensitive to it. Your attempt to help can come across as mansplaining, and throw you straight into the draining and exhausting pile.

[…]

When someone is not asking a question, they probably do not want advice. This means, you have not been invited to give it. Your advice is not welcome. No matter how much you think there’s a solution to their predicament or they could do things a little differently, you’ve not been invited to share your advice. So don’t.

Source: How to give advice on the internet without being an utter menace | Another angry woman

Saying "I don't know" is a privilege

Paul Graham is a smart guy. He’s a venture capitalist, and here he’s in conversation with Tyler Cowen, an economist. Both men are further to the right, politically, than me — so I winced a little at their references to the ‘far left’.

That being said, it’s an interesting episode and Cowen’s rapid-fire questioning is a useful tactic for getting guests to be more candid than they would otherwise be. What I found fascinating about Graham’s responses was that he would often say “I don’t know” instead of the prosaic “that’s a great question”. I guess once you’ve got the standing he has, there’s no need for him to pretend otherwise.

Tyler and Y Combinator co-founder Paul Graham sat down at his home in the English countryside to discuss what areas of talent judgment his co-founder and wife Jessica Livingston is better at, whether young founders have gotten rarer, whether he still takes a dim view of solo founders, how to 2x ambition in the developed world, on the minute past which a Y Combinator interviewer is unlikely to change their mind, what YC learned after rejecting companies, how he got over his fear of flying, Florentine history, why almost all good artists are underrated, what’s gone wrong in art, why new homes and neighborhoods are ugly, why he wants to visit the Dark Ages, why he’s optimistic about Britain and San Francisco, the challenges of regulating AI, whether we’re underinvesting in high-cost interruption activities, walking, soundproofing, fame, and more.

Source: Paul Graham on Ambition, Art, and Evaluating Talent (Ep. 186) | Conversations with Tyler

Actions speak louder than words

This article popped up on my feeds a couple of weeks ago and I recognised the organisation behind the website. Having listened to an excellent Art of Manliness podcast episode featuring Dr John Barry, I knew that ‘The Centre for Male Psychology’ is actually legit.

What this article discusses I’ve found true in my own life. I am by temperament introspective, which means for many years I thought the answer to any form of melancholy came in thinking. But, actually, I’ve found the answer to be in action: climbing mountains, running, and doing things with my hands.

The two ways of regulating emotions have implications for the field of mental health, which relies predominately on talking therapy – in particular talking about feelings. Does this not suggest that there could be, and perhaps needs to be, more emphasis on discussing the therapeutic value of action? It may not be practical to conduct therapy while engaged in physical activity such as a gym workout or while out walking in the streets, but the therapeutic discussion can at least focus more on the “doing” aspects of a man’s life. For example a therapist might ask how did problem XYZ make a man act out, along with exploring which physical activities or responses might help him to modulate such emotions more optimally in future. Does riding a Jet Ski, or going for a jog, or building some wooden furniture make him feel better or worse? Does that difficult manoeuvre in the video game remind of difficulties in his relationship with his girlfriend? Does the same video game provide some optimism that if he can get past the difficult manoeuvre within the game then perhaps he can find a way around the impasse with his girlfriend? Activities like these provide a symbolic canvas on which men project, and then work through various scenarios of real life, with potential to shift affective resonances in the process.

When a man talks about how he operated a lathe, did some welding, restored a bit of discarded and broken furniture, might he be sharing a strategy of how he successfully redirected suicidal feelings? Perhaps we should not be so quick to shut down these conversations with accusations of being work obsessed, effectively stymieing natural male expressions with injunctions to talk less about activities and to communicate more effusively with feelings words. For many men, activities are the preferred canvases on which they can process feelings and carve out some genuine psychological equilibrium.

This is probably a reason why men talk so much about work, sports, building things, computer games, recreational activities – it may be their preferred way of communicating the ways they wrestle with psychological issues. Sadly, the therapeutic industry is quick to chastise men’s preference for intelligent actions, conflating them with pathological reflexes such as unconscious acts of aggression, dependence on drugs and booze, and other destructive versions of so-called “acting-out” as they are so often branded.

Source: Men tend to regulate their emotions through actions rather than words | The Centre for Male Psychology

Marginally Employed

For various reasons, which I will explain elsewhere, this post by Dan Sinker, which I read this morning, was particularly important to me. Dan is awesome, and I am thankful for his candor.

A month or so ago I was at a cookout for an old work colleague and friend. It was 100% people who I haven't seen since at least the pandemic hit, and most of them a few years even before that. And so, obviously, the first question anyone would ask is "what are you up to now," and, well, that's sort of a hard question for me to answer. As has been established on this blog before, I do a lot of things. Some of them are job-shaped, while others look, well, like an entire fictional town in Ohio. All of them are important to me and all of them are a little hard to explain.

And so it was on that night that my brain—sometimes a friend, other times an enemy—responded “Well, I’m marginally employed,” before launching into a full-throated explanation of the wild world of Question Mark, Ohio to increasingly concerned onlookers.

I left the cookout feeling pretty weird, if I’m being honest. Since I’d last seen most of the folks that were there, they’d moved on to really incredible work. And here I was cobbling together bits and pieces of job-shaped things while spinning a yarn about a town plagued with disappearances. And then there was the term I used: marginally employed, which felt right but also felt a little embarrassing.

And then something happened. I talked about this on Says Who afterward and I heard from a bunch of folks who said, basically: Hey, me too. And I realized like, wait a second: I want to be doing work like I’m doing. Work that’s weird and exciting and, admittedly, hard to describe to people while also gnawing on some ribs. I don’t want to be doing a 9-5. I want to be marginally employed.

And so I made a patch. It’s really simple, just maroon on white and set in Cooper Black, my very favorite typeface. It reads, simply, “Marginally Employed.” No apologies, no frills. I love it. You might too. It’s $10 and ships free in the US.

Source: Best Laid Plans | dansinker.com

Almshouses as a way forward for social housing

While I’m aware of medieval almshouses, I didn’t know they were still a thing. It’s great that more are being built now than at any time since the Victorian era, and I hope this kind of approach, along with co-operative housing, becomes even more widespread.

It looks like they could be particularly useful for everything from ending rough sleeping to preventing premature deaths in old age caused by poverty.

Almshouses are the oldest form of social housing in the world: the oldest foundation still in existence dates from about 990. Legally, historically and socially unique – exempt from right to buy legislation and so remaining as a permanent part of the community once gifted – there are 30,000 throughout the UK, providing affordable housing for more than 36,000 residents.

They are owned and managed by a network of more than 1,600 independent charities, and nearly all market towns in the UK have at least one almshouse. In some rural areas, they are the only provider of affordable, community housing.

In a time of a severe shortage of affordable rental accommodation, almshouse charities have long been trying to get attention from philanthropists and the government to make the case that their role is more vital than ever – that they should be put at the forefront of the community housing concept, providing an “exemplar housing model”.

Now, a new research project, the Almshouse Longevity Study from Bayes business school, has given extra ammunition to their call, finding that those fortunate enough to live in an almshouse receive a longevity boost of almost two and a half years – equating to an extra 15% of future life for someone aged in their early 70s.

Despite not looming large in the public’s awareness, more almshouses are being built today than have been since the Victorian era; while most are for elderly people, some have no age restrictions and are able to accommodate families, people with disabilities and key workers.

[...]

Paul Mullis, the chief executive of the Durham Aged Mineworkers’ Homes Association, the biggest almshouse charity in the UK, agreed. “Our residents know they can look forward to tomorrow because the things that make people’s lives worth living haven’t changed in the 1,000 years that almshouses were created to target: community, safe and secure housing, a sense of purpose.”

Meredith Whittaker on AI doomerism

This interview with Signal CEO Meredith Whittaker in Slate is so awesome. She brings the AI 'doomer' narrative back, time and again, to surveillance capitalism and to the massive mismatch between the harm currently being done to marginalised people and the potential harm to very powerful people.

What we’re calling machine learning or artificial intelligence is basically statistical systems that make predictions based on large amounts of data. So in the case of the companies we’re talking about, we’re talking about data that was gathered through surveillance, or some variant of the surveillance business model, that is then used to train these systems, that are then being claimed to be intelligent, or capable of making significant decisions that shape our lives and opportunities—even though this data is often very flimsy.

[...]

We are in a world where private corporations have unfathomably complex and detailed dossiers about billions and billions of people, and increasingly provide the infrastructures for our social and economic institutions. Whether that is providing so-called A.I. models that are outsourcing decision-making or providing cloud support that is ultimately placing incredibly sensitive information, again, in the hands of a handful of corporations that are centralizing these functions with very little transparency and almost no accountability. That is not an inevitable situation: We know who the actors are, we know where they live. We have some sense of what interventions could be healthy for moving toward something that is more supportive of the public good.

[...]

My concern with some of the arguments that are so-called existential, the most existential, is that they are implicitly arguing that we need to wait until the people who are most privileged now, who are not threatened currently, are in fact threatened before we consider a risk big enough to care about. Right now, low-wage workers, people who are historically marginalized, Black people, women, disabled people, people in countries that are on the cusp of climate catastrophe—many, many folks are at risk. Their existence is threatened or otherwise shaped and harmed by the deployment of these systems.... So my concern is that if we wait for an existential threat that also includes the most privileged person in the entire world, we are implicitly saying—maybe not out loud, but the structure of that argument is—that the threats to people who are minoritized and harmed now don’t matter until they matter for that most privileged person in the world. That’s another way of sitting on our hands while these harms play out. That is my core concern with the focus on the long-term, instead of the focus on the short-term.

Source: A.I. Doom Narratives Are Hiding What We Should Be Most Afraid Of | Slate

Playing the right game

Thanks to Laura for pointing me towards this post by Simone Stolzoff. There’s so much to unpack, which perhaps I’ll do in a separate post. It touches on reputation and credentialing, but also motivation, gamification, and “value self-determination”.

Extracting yourself from the false gods of vanity metrics is hard, but massively liberating. It starts with realising small things like you don’t actually need to keep up a ‘streak’ on Duolingo to learn a language. But there’s a through line from that to coming to the conclusion that you don’t need to win awards for your work, or the status symbol of a fancy car/house.

I interviewed over 100 workers—from kayak guides in Alaska to Wall Street bankers in Manhattan—and met several people who achieved nearly every goal set out for them, only to realize they were winning a game they didn’t enjoy playing.

How do so many of us find ourselves in this position, climbing ladders we don’t truly want to be on? C. Thi Nguyen, a philosopher and game design researcher at the University of Utah, has some answers. Nguyen coined the term “value capture,” a phenomenon that I came to see all around me after I learned about it. Here’s how it works.

Most games establish a world with a clear goal and rankable achievements: Pac-Man must eat all the dots; Mario must save the princess. Video games offer what Nguyen calls “a seductive level of value clarity.” Get points, defeat the boss, win. In many ways, video games are the only true meritocratic games people can play. Everyone plays within clearly defined boundaries, with the same set of inputs. The most skilled wins.

Our careers are different. The games we play with our working hours also come with their own values and metrics that matter. Success is measured by how much money you make—for your company and for yourself. Promotions, bonuses, and raises mark the path to success, like dots along the Pac-Man maze.

These metrics are seductive because of their simplicity. “You might have a nuanced personal definition of success,” Nguyen told me, “but once someone presents you with these simple quantified representations of a value—especially ones that are shared across a company—that clarity trumps your subtler values.” In other words, it is easier to adopt the values of the game than to determine your own. That’s value capture.

There are countless examples of value capture in daily life. You get a Fitbit because you want to improve your health but become obsessed with maximizing your steps. You become a professor in order to inspire students but become fixated on how often your research is cited. You join Twitter because you want to connect with others but become preoccupied by the virality of your content. Naturally, maximizing your steps or citations or retweets is good for the platforms on which these status games are played.

Source: Playing a Career Game You Actually Want to Win | Every

Bad work

Not just artists: we all go through life’s ups and downs, good periods and bad. Right now is the least tolerant time in my lifetime. Everyone’s supposed to be on it 24/7.

Viewed in the context of the episode, Sylvester is talking, specifically, about the “professionalization” and “commercialization” of art, and basically the hype machine of the art world.

Source: Artists must be allowed to make bad work | Austin Kleon

Digital wallets for verifiable credentials

Purdue University had something like this almost a decade ago, but there’s even more call for this kind of thing now, post-pandemic and in a Verifiable Credentials landscape.

Everyone’s addicted to marrying ‘skills’ with ‘jobs’ but I think there’s definitely an Open Recognition aspect to all of this.

ASU Pocket captures students’ traditional and non-traditional educational credentials, which are now, with the emergence of verifiable credentials, more portable than ever before. This gives students the autonomy to securely own, control and share their holistic evidence of learning with employers.

A digital wallet, like ASU Pocket, holds verifiable credentials – which are digital representations of real-world credentials like government-issued IDs, passports, driver’s licenses, birth certificates, educational degrees, professional certifications, awards, and so on. In the past, these credentials have been stored in physical form, making them susceptible to fraud and loss. However, with advances in technology, these credentials can be stored electronically, using cryptographic techniques to ensure their authenticity. This makes it possible to verify the credential without revealing sensitive information, such as a social security number.

[…]

At ASU Pocket, we also view verifiable credentials as an important tool for social impact. They provide a way for people to document their skills and accomplishments, which can be used to gain new opportunities. For example, someone with a verifiable skill credential for customer service might be able to use it to get a job in a call center. Likewise, someone with a verifiable credential for computer programming might be able to use it to get a job as a software developer.

In both cases, the verifiable credential provides a way for the individual to demonstrate their skills and qualifications gained through or outside of traditional learning pathways. This is especially impactful for marginalized groups who may have difficulty obtaining traditional credentials, such as degrees or certifications.

Source: ASU Pocket: A digital wallet to capture learners’ real-time achievements
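
The quoted passage mentions verifying a credential without revealing sensitive information. As a rough sketch of the hash-commitment idea behind that property (all function names are my own illustration, and the HMAC "signature" is a stand-in for the asymmetric signatures real verifiable credentials use; this is not ASU Pocket's actual design):

```python
import hashlib
import hmac
import json
import secrets

ISSUER_KEY = b"issuer-secret-key"  # stand-in for a real signing keypair

def commit(claim: str, value: str, salt: str) -> str:
    """Salted hash commitment to one claim; reveals nothing by itself."""
    return hashlib.sha256(f"{claim}={value}|{salt}".encode()).hexdigest()

def issue(claims: dict) -> dict:
    """Issuer: commit to every claim, then sign the set of commitments."""
    salts = {k: secrets.token_hex(8) for k in claims}
    commitments = sorted(commit(k, v, salts[k]) for k, v in claims.items())
    signature = hmac.new(ISSUER_KEY, json.dumps(commitments).encode(),
                         hashlib.sha256).hexdigest()
    return {"commitments": commitments, "signature": signature, "salts": salts}

def verify_disclosure(credential: dict, claim: str, value: str, salt: str) -> bool:
    """Verifier: check one disclosed claim without seeing any other."""
    expected = hmac.new(ISSUER_KEY, json.dumps(credential["commitments"]).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return False  # the commitment set wasn't signed by the issuer
    return commit(claim, value, salt) in credential["commitments"]

cred = issue({"name": "Ada", "skill": "customer service", "ssn": "000-00-0000"})
# Holder discloses only the skill claim, keeping the SSN private
print(verify_disclosure(cred, "skill", "customer service", cred["salts"]["skill"]))  # True
```

The verifier learns that the issuer vouched for the disclosed claim, but the other commitments stay opaque, which is the "verify without revealing" property the article describes.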

AI generated art aesthetic

Yes, it’s “just typing prompts” but then drawing is “just making marks on paper”. Love this aesthetic.

AI generated intercom

Source: An Improbable Future

Bad coffee

I love this essay, not because I necessarily agree with it, but because I agree with the vibe of it. It’s from 2019, so it must have come via my social feeds.

Keith Pandolfi used to own a coffee shop which served the best barista-crafted flat whites, etc. in the area. These days he drinks Maxwell House. Likewise, there are areas of my life in which I’ve gone from being very fussy to not really caring. It’s the letting go that matters.

Coffee mug

The best cup of coffee I ever had was the dirty Viennese blend my teenage friends and I would sip out of chipped ceramic mugs at a cafe near the University of Cincinnati while smoking clove cigarettes and listening to Sisters of Mercy records, imagining what it would be like to be older than we were. The best cup of coffee was the one I enjoyed alone each morning during my freshman year at Ohio State, huddled in the back of a Rax restaurant reading the college paper and dealing with the onset of an anxiety disorder that would never quite be cured.

[…]

I don’t have memories of… bonding experiences taking place over a flat white at a Manhattan coffee shop or a $5 cup of nitro iced coffee at a Brooklyn cafe. High-end coffee doesn’t usually lend itself to such moments. Instead, it’s something to be fussed over and praised; you talk more about its origin and its roaster, its flavor notes and its brewing method than you talk to the person you’re enjoying it with. Bad coffee is the stuff you make a full pot of on the weekends just in case some friends stop by. It’s what you sip when you’re alone at the mechanic’s shop getting your oil change, thinking about where your life has taken you; what you nurse as you wait for a loved one to get through a tough surgery. It’s the Sanka you share with an elderly great aunt while listening to her tell stories you’ve heard a thousand times before. Bad coffee is there for you. It is bottomless. It is perfect.

Source: The Case for Bad Coffee | Serious Eats

Ungrading the university experience

There’s some discussion of students ‘gaming the system’ in this article about ungrading university courses, but nothing much about AI tools like ChatGPT. This movement has been gathering pace for years, and I think that we’re at a tipping point.

Hopefully, this will lead to more Open Recognition practices rather than just breaking down chunky credentials into microcredentials.

[A]dvocates say the most important reason to adopt un-grading is that students have become so preoccupied with grades, they aren't actually learning.

“Grades are not a representation of student learning, as hard as it is for us to break the mindset that if the student got an A it means they learned,” said Jody Greene, special adviser to the provost for educational equity and academic success at UCSC, where several faculty are experimenting with various forms of un-grading.

If a student already knew the material before taking the class and got that A, “they didn’t learn anything,” said Greene. And “if the student came in and struggled to get a C-plus, they may have learned a lot.”

[…]

[S]everal colleges and universities… already practice unconventional forms of grading. At Reed College in Oregon, students aren’t shown their grades so that they can “focus on learning, not on grades,” the college says. Students at New College of Florida complete contracts establishing their goals, then get written evaluations about how they’re doing. And students at Brown University in Rhode Island have a choice among written evaluations that only they see, results of “satisfactory” or “no credit,” and letter grades — A, B or C, but no D or F.

MIT has what it calls “ramp-up grading” for first-year students. In their first semesters, they get only a “pass,” without a letter; if they don’t pass, no grade is recorded at all. In their second semesters, they get letter grades, but grades of D and F are not recorded on their transcripts.

Source: Some colleges are eliminating freshman grades by ‘ungrading’ | NPR

Reducing website carbon emissions by blocking ads

Blocking advertising on the web is not only good for increasing the speed and privacy of your own web browsing, but also good for the planet.

What is the environmental impact of visiting the homepage of a media site? What part do advertising, and analytics, play when it comes to the carbon footprint? We tried to answer these questions using GreenFrame, a solution we developed to measure the footprint of our own developments.

The results are insightful: up to 70% of the electricity consumption (and therefore carbon emissions) caused by visiting a French media site is triggered by advertisements and stats. Therefore, using an ad blocker even becomes an ecological gesture.

[…]

Overall we observe the same thing: the carbon footprint of a website decreases if there are no ads or trackers on the website. The difference is significant: Between 32% and 70% of the energy consumed by the browser and the network is due to monetization.

The websites analyzed generate between 70 and 130 million visits per month, and their work has therefore a real impact on the environment.

Reducing the consumption of one of these sites by only 10% (20mWh) per visit, for a site with 100 million monthly visitors, is equivalent to saving 24,000 kWh per year.

Source: Media Websites: 70% of the Carbon Footprint Caused by Ads and Stats | Marmelab
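
The quoted savings figure is easy to sanity-check, using only the numbers given in the article:

```python
# 20 mWh saved per visit, for a site with 100 million visits per month
mwh_per_visit = 20
visits_per_month = 100_000_000
months = 12

mwh_per_year = mwh_per_visit * visits_per_month * months
kwh_per_year = mwh_per_year / 1_000_000  # 1 kWh = 1,000,000 mWh
print(kwh_per_year)  # 24000.0 -- matches the quoted 24,000 kWh per year
```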

Switching to Arc

It’s not often I’ll post tools here, but after a few days of using it, I’m sold on the Arc browser.

My web browser history over the last quarter of a century goes something like: Netscape Navigator –> Internet Explorer –> Firefox –> Chrome –> Brave –> Arc.

Perhaps I should record a screencast, but the three things I like most about Arc are:

  • Built-in 'Spaces' (for client projects, etc.)
  • Split screen view
  • Easel (clip *live* parts of web pages)

Like Brave, it's based on Chromium, so all of the Chrome web extensions I've been using just work. Awesome. There are lots of reviews on YouTube.

Experience a calmer, more personal internet in this browser designed for you. Let go of the clicks, the clutter, the distractions.

Source: Arc | The Browser Company

The sleight of hand of crypto

Cory Doctorow is doing the rounds for his new book at the moment. But because he’s Cory, he’s not just phoning it in, or parroting the same lines.

Take this interview in Jacobin, for example. Yes, he’s talking about why he decided to write a story about crypto, but he’s so well informed about this stuff on a technical level that it’s a joy to read the way he explains things.

There’s this kind of performative complexity in a lot of the wickedness in our world — things are made complex so they’ll be hard to understand. The pretense is they’re hard to understand because they’re intrinsically complex. And there’s a term in the finance sector for this, which is “MEGO:” My Eyes Glaze Over. It’s a trick.

[…]

A lot of the crypto stuff starts with what a sleight-of-hand artist would do. “Alright, we know that cryptography works and can keep secrets and we know that money is just an agreement among people to treat something as valuable. What if we could use that secrecy when processing payments and in so doing prevent governments from interrupting payments?”

After this setup, the con artist can get the mark to pick his or her poison: “It will stop big government from interfering with the free market” or “It will stop US hegemony from interdicting individuals who are hostile to American interests in other countries and allow them to make transactions” or “It will let you send money to dissident whistleblowers who are being blocked by Visa and American Express.” These are all applications that, depending on the mark’s political views, will affirm the rightness of the endeavor. The mark will think, that is a totally legitimate application.

It starts with a sleight of hand because all the premises that the mark is agreeing with are actually only sort of right. It’s a first approximation of right and there are a lot of devils in the details. And understanding those details requires a pretty sophisticated technical understanding.

Source: Cory Doctorow Explains Why Big Tech Is Making the Internet Terrible | Jacobin

AI writing, thinking, and human laziness

In a Twitter thread by Paul Graham, which I came across via Hacker News, he discusses how it’s always safe to bet on human laziness. Ergo, most writing will be AI-generated in a year’s time.

However, as he says, to write is to think. So while it’s important to learn how to use AI tools, it’s also important to learn how to write.

In this post by Alan Levine, he complains about ChatGPT’s inability to write good code. But the most interesting paragraph (cited below) is the last one, in which we, consciously or unconsciously, put the machine on a pedestal and try to cajole it into doing something we can already do.

I’m reading Humanly Possible by Sarah Bakewell at the moment, so I feel like all of this links to humanism in some way. But I’ll save those thoughts until I’ve finished the book.

ChatGPT is not lying or really hallucinating, it is just statistically wrong.

And the thing I am worried about is that in this process, knowing I was likely getting wrong results, I clung to hope it would work. I also found myself skipping my own reasoning and thinking, in the rush to refine my prompts.

Source: Lying, Hallucinating? I, MuddGPT | CogDogBlog

Taxing land rather than labour

I think I’ve always been somewhat of a Georgist, but perhaps didn’t know the name for it. The central tenet is that governments should be funded by a tax on land rather than labour.

There’s also the idea that this tax would replace all other taxes, which I guess is kind of the mirror of Universal Basic Income replacing all other benefits. I’m happy to be convinced on that, but already sold on the land tax idea.

Georgism, in some sense, is the idea that no one really owns land, but instead, you rent its exclusive use from everyone else through Land Value Taxes.

[…]

If I claimed to own a 1-dimensional line that ran on the ground, and that you need to step over it, or that I owned a 6-inch cube floating off the ground, and you needed to duck under it, you’d rightly think I was insane.

However, if I own a plot of land, i.e. a 2D space on the surface of the earth, it’s considered either insane (or tragically primitive) to not believe in this.

(Yes, through air rights you own 3D space, but it generally has to be above 2D land, floating cubes still seem nonsensical).

Source: Developing an intuition for Georgism | Atoms vs Bits

Image: Gautier Pfeiffer
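
As a toy illustration of the central tenet (the numbers and tax rate are entirely hypothetical, not drawn from the article): under a Land Value Tax, only the unimproved land value is taxed, so building on a plot doesn't raise your bill.

```python
# Illustrative 5% annual rate -- not a real-world figure
LVT_RATE = 0.05

def land_value_tax(land_value: float, improvement_value: float) -> float:
    """Tax owed under an LVT: improvements (buildings, labour) are untaxed."""
    return land_value * LVT_RATE  # improvement_value deliberately ignored

# Two plots with identical land underneath pay identical tax,
# regardless of what is built on top
vacant_lot = land_value_tax(land_value=200_000, improvement_value=0)
apartment_block = land_value_tax(land_value=200_000, improvement_value=5_000_000)
print(vacant_lot, apartment_block)  # 10000.0 10000.0
```

That's the claimed appeal: unlike a tax on labour or buildings, the LVT doesn't penalise productive use of the land, only exclusive occupation of it.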

AI and work socialisation

I've bolded what I consider to be the most important part of this article by danah boyd. It's a reflection on two different 'camps' when it comes to AI and jobs, but she surfaces an important change that's already happened in society when it comes to the workforce: we just don't train people any more.

Couple this with AI potentially replacing lower-paid jobs (where people might 'learn the ropes' while working) and... well, it's going to be interesting.

While getting into what it means to be human is likely to be a topic of a later blog post, I want to take a moment to think about the future of work. Camp Automation sees the sky as falling. Camp Augmentation is more focused on how things will just change. If we take Camp Augmentation’s stance, the next question is: what changes should we interrogate more deeply? The first instinct is to focus on how changes can lead to an increase in inequality. This is indeed the most important kinds of analysis to be done. But I want to noodle around for a moment with a different issue: deskilling.

[...]

Today, you are expected to come to most jobs with skills because employers don’t see the point of training you on the job. This helps explain a lot of places where we have serious gaps in talent and opportunity. No one can imagine a nurse trained on the job. But sadly, we don’t even build many structures to create software engineers on the job.

However, there are plenty of places where you are socialized into a profession through menial labor. Consider the legal profession. The work that young lawyers do is junk labor. It is dreadfully boring and doesn’t require a law degree. Moreover, a lot of it is automate-able in ways that would reduce the need for young lawyers. But what does it do to the legal field to not have that training? What do new training pipelines look like? We may be fine with deskilling junior lawyers now, but how do we generate future legal professionals who do the work that machines can’t do?

This is also a challenge in education. Congratulations, students: you now have tools at your disposal that can help you cut corners in new ways (or outright cheat). But what if we deskill young people through technology? How do we help them make the leap into professions that require more advanced skills?

[...]

Whether you are in Camp Augmentation or Camp Automation, it’s really important to look holistically about how skills and jobs fit into society. Even if you dream of automating away all of the jobs, consider what happens on the other side. How do you ensure a future with highly skilled people? This is a lesson that too many war-torn countries have learned the hard way. I’m not worried about the coming dawn of the Terminator, but I am worried that we will use AI to wage war on our own labor forces in pursuit of efficiency. As with all wars, it’s the unintended consequences that will matter most. Who is thinking about the ripple effects of those choices?

Source: Deskilling on the Job | danah boyd

Attempting to quantify the unquantifiable

This article, which I discovered via Sentiers, discusses the rise of ‘Quantitative Aesthetics’, or putting numbers on things you like to prove other people wrong. It’s basically numbers as a shorthand for status, and once you realise it, you see it everywhere. It’s the social media-ification of all of the things.

[T]here’s something called the McNamara Fallacy, a.k.a. the Quantitative Fallacy. It is summarized as “if it cannot be measured, it is not important.” The Heller article made me reflect on how a version of it is now very present, and growing, at the grassroots of taste.

On one level, this is seen in a rise of a kind of wonky obsession with business stats in fandoms, invoked as a way to convey the rightness of artistic opinions—what I want to call Quantitative Aesthetics. (There are actually scientists who study aesthetic preference in labs and use the term “quantitative aesthetics.” I am using it in a more diffuse way.)

It manifests in music. As the New York Times wrote in 2020 of the new age of pop fandom, “devotees compare No. 1s and streaming statistics like sports fans do batting averages, championship wins and shooting percentages.” Last year, another music writer talked about fans internalizing the number-as-proof-of-value mindset to extreme levels: “I see people forcing themselves to listen to certain songs or albums over and over and over just to raise those numbers, to the point they don’t even get enjoyment out of it anymore.”

The same goes for film lovers, who now seem to strangely know a lot about opening-day grosses and foreign box office, and use the stats to argue for the merits of their preferred product. There was an entire campaign by Marvel super-fans to get Avengers: Endgame to outgross Avatar, as if that would prove that comic-book movies really were the best thing in the world.

On the flip side, indie director James Gray, of Ad Astra fame, recently complained about ordinary cinema-goers using business stats as a proxy for artistic merit: “It tells you something of how indoctrinated we are with capitalism that somebody will say, like, ‘His movies haven’t made a dime!’ It’s like, well, do you own stock in Comcast? Or are you just such a lemming that you think that actually has value to anybody?”

Source: How We Ended Up in the Era of ‘Quantitative Aesthetics,’ Where Data Points Dictate Taste | Artnet

You can’t ruminate and listen at the same time

David Cain at Raptitude has a post which is somewhat bizarrely entitled 10 Things I Want to Communicate to the Human Species Before I Die. The first point is about shopping trolleys, so I’m not sure how tongue-in-cheek it all is.

Anyway, without saying whether I agree or disagree with any of the other statements, I want to draw attention to the last one. Ruminating is a complete waste of time, and as someone susceptible to it I want to +1 the advice to get out of your head and listen if you’re succumbing to it.

For me, that often means listening to my iPod in the early hours of the morning while lying sleepless in bed. But it can mean listening to other people, or just your surroundings.

The tendency of the modern human is to live in their head — almost perpetually monologuing and forecasting and rehashing. This is a seldom-helpful habit most of us reinforce constantly by tumbling along with its momentum. You can weaken the grip of the ruminative mind by frequently taking a few seconds to be quiet and listen to your surroundings. Doing this reveals something interesting: when you actively use your attention for listening (or in any other intentional way) it cannot be used for more rumination. Each time you do this, the gravity of the monologuing mind weakens. If even a fraction of the population learned how to perforate their ongoing ruminative thought-mill like this, it might be a different world.

Source: 10 Things I Want to Communicate to the Human Species Before I Die | Raptitude

Arc browser is pretty nifty

I’m not going to gush as I’ve had it installed mere hours, but this article persuaded me to actually use the invite code I’d got for the Arc browser. First impressions were good enough for it to replace Brave as my default, for the time being, on my Mac Studio.

My colleague Laura always has tabs for client projects to hand, as she has a Firefox extension which separates tab groups. Arc does this quickly, seamlessly, and by default. Also, I used to have my tabs at the side of my browser and I’m not sure why or how I got out of the habit of doing so.

There are lots of other nice things about Arc which are mentioned in the review. It’s Chromium-based, so everything just works, including bringing across your bookmarks, saved passwords, and browsing history.

I realize calling Arc “the most transformative app I’ve used in decades” is a bold statement that requires a lot of support. I won’t skimp on words in this article telling you why—it’s that important and requires new ways of thinking about how you work on the Web.

[…]

If the sidebar is Arc’s most prominent interface element, Spaces is the feature that leverages it more than anything else in Arc. A Space is a collection of tabs in the sidebar. It’s easy to switch between them using keyboard shortcuts (Control-1, Control-2, etc., or Command-Option-Left/Right Arrow) or by clicking little icons at the bottom of the sidebar.

You can assign each Space a color, providing an instant visual clue for what Space you’re in. For me, Personal is a green/yellow/teal gradient, TidBITS is purple, and FLRC is blue, while my fourth space—set to hold FLRC tabs for Google Docs and Google Sheets—is yellow. Each Space can also have a custom emoji or icon that identifies it in the switcher at the bottom of the sidebar.

[…]

The most obvious part of Arc’s visual interface is its sidebar. As I said earlier, the sidebar provides access to multiple color-coded Spaces, each with its own collection of tabs. It’s easy to gloss over the importance of putting tabs in a sidebar, but that would be a mistake. Sidebar tabs aren’t simply a vertical version of tabs across the top of the browser window, they’re substantively better.

[…]

But what the sidebar really provides is a sense of comfort, of familiarity. There’s a French phrase, mise en place, that refers to setting out all your ingredients and tools before cooking so everything you need is at hand when you need it. Arc’s sidebar, when populated with the pinned tabs you use and arranged the way you think, provides that sense of mise en place. I actually want to sit down at Arc because it helps me channel my thoughts and actions toward my goals for the day.

Source: Arc Will Change the Way You Work on the Web | TidBITS

Kanban > Scrum

I spend most of my time coordinating with one other human being at work. After that, I’m coordinating with a maximum of three other people internally, and then with clients.

So take what I’ve got to say about Kanban with a pinch of salt. But I’ve worked at bigger organisations, with fancier methodologies. Still, nothing beats having a board which shows what’s to do, doing, and done (with some tweaks, perhaps, for ‘feedback needed’ and ‘undead’!)
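
The to do / doing / done idea is simple enough to sketch as a data structure. This is just my illustration, not anything from the article: the column names and the work-in-progress limit of 3 are arbitrary choices, but the WIP limit is the bit that makes a Kanban board more than a glorified to-do list.

```python
# A minimal Kanban board: columns are ordered stages, and a
# work-in-progress (WIP) limit on "doing" forces you to finish
# things before starting new ones.

WIP_LIMIT = 3  # arbitrary; pick what suits your team

board = {"todo": [], "doing": [], "done": []}

def add_task(board, task):
    """New work always enters the leftmost column."""
    board["todo"].append(task)

def move(board, task, src, dst):
    """Pull a task to the next stage, refusing to overload 'doing'."""
    if dst == "doing" and len(board["doing"]) >= WIP_LIMIT:
        raise ValueError("WIP limit reached: finish something first")
    board[src].remove(task)
    board[dst].append(task)
```

Usage is as boring as you’d hope: `add_task(board, "write proposal")`, then `move(board, "write proposal", "todo", "doing")` when someone picks it up.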

Kanban board

I’m not saying Scrum doesn’t work. I’m saying the exact opposite. Scrum does work, but it works for the same reasons Kanban does. The difference between them is that Scrum is slower and more prescriptive, and thus less adaptable (or “agile”, whatever you wanna call it).

[…]

[B]ecause Kanban focuses on tasks rather than sprint-sized batches, it pushes responsibility to the edges of the team, meaning engineers are responsible for going after the pieces of information they need to move forward.

When that happens, instead of designing features by committee, which demands a significant amount of back-and-forth discussions, decisions happen locally, and thus are easier to make.

Additionally, fewer people making decisions lead to fewer assumptions. Fewer assumptions, in turn, lead to shipping smaller pieces of software more quickly, allowing teams to truncate bad paths earlier.

Source: You don’t need Scrum. You just need to do Kanban right. | Lucas F. Costa

Image: Visual Thinkery for WAO

Just this cold beach that nourishes you

I’ve come across so many great artworks and artists that either directly or obliquely protest the coronation, the monarchy, and everything the Tories stand for. Here’s one from Robert Montgomery which, I think, is actually from the queen’s jubilee.

ENGLAND IS THE FIRST LIE. ENGLAND IS A LIE INVADING KINGS TOLD YOU TO TAKE YOUR ACTUAL LAND FROM YOU. THIS LAND IS YOUR LAND FROM THE FLAT NORFOLK NIGHT TO THE BLUE CORNISH MORNING. JUST A WILD PAGAN LAND WITH NO NAME AND NO FLAG. JUST THIS COLD BEACH THAT NOURISHES YOU / JUST THE WIND ON THIS GRASSLAND THAT NOURISHES YOU / JUST THIS RAIN ON YOUR FACE IN THE MORNING IN THIS BLANK SPRINGTIME THAT NOURISHES YOU

Source: BILLBOARDS — ROBERT MONTGOMERY

On co-operative dynamics

Abi Handley (second from the left in this photo) is an inspiration to me and others in the co-op movement. It was a little surprising, therefore, when she told me a couple of months ago that she was stepping down from being a member of Outlandish. After all, she’s been with them for 12 years, almost since the beginning.

As this blog post explains, however, part of understanding the dynamics within a co-operative is knowing when to take the reins and when to step back. She’ll continue to collaborate with Outlandish, but also more with others. This is great for us; in fact, she joined WAO during our last monthly co-op day to explore opportunities.

Congrats, Abi! Onwards and upwards 💪

What I’m trying to achieve in life has changed quite a lot since I started in Outlandish at 29. I’m now 41 and got two kids, a house, built a successful business with people whom I love and respect. So what’s my next challenge? How can I push myself next?

[…]

Integrity for me is one of the most important principles I try and live by. Modelling the behaviours and values I hold dear for me is the only true way I want to be. Being able to genuinely live and breathe what I support my clients to try in terms of ways of working is essential for me to be an authentic and valuable coach & facilitator. I need to do what I say to others to try (because I believe and see that it works when everyone opts in), always and in every team I work in.

I have struggled in letting go of the role of ‘Mum’ in Outlandish, despite desperately wanting to, and that is not a solo challenge – it is also incredibly difficult to change those kinds of dynamics within any group. By me stepping down, I am modelling the need to not be at the centre of all things Outlandish, because despite trying with all our might, it is so easy to step into the safe role of looking after things when its not going so well or a challenge comes up for us. I don’t think that serves my goals, nor Outlandish’s. I think the best way we can achieve Outlandish being even more co-operative than it already is, is by me stepping back. That’s a scary thing to do for us all, but I’m going to take the risk and be excited about what might happen, for all of us.

Source: Abi is stepping down from being a member of Outlandish | Outlandish blog

Comportamento Geral

As part of the #NotMyKing protests, I came across a printmaker and artist whose work I explored further. Highly discouraged by my wife from putting up something explicitly anti-monarchy, I instead placed this from Katherine Anteney on view through the window of my home office.

It’s from a Brazilian anti-dictatorship protest song from the 1970s and roughly translates as: “Everything is good, everything is great, but what happens tomorrow, mate, when they take your carnival away?”

Seems appropriate for this weekend, anyway.

Source: Comportamento Geral | Katherine Anteney

The internet should be a place for connection, surprise, and delight

As new platforms try to imitate existing ones, it becomes more challenging for users to find unique and diverse voices (and content).

So it’s important for users, developers, and investors to encourage innovation and diversity in online spaces, instead of solely focusing on creating platforms that trap users and prioritise profit.

You know, the internet still has the potential to be a place for connection, surprise, and delight. But it requires a collective effort to resist the monopolistic tendencies of a few dominant players.

This kind of duplication isn't just a clear failure of imagination; it is the kind of innovation that capitalism rewards. Don't make something new, make the same thing that someone else made very successful, but slightly better. To have a proven concept, after all, is to plagiarize. It's annoying to see millions of dollars thrown at making more-or-less literal dupes of internet companies that everyone is already using begrudgingly and with diminishing emotional returns. It's maybe more frustrating to realize that the goals of these companies is the same as their predecessors, which is to make the internet smaller.

[…]

The death of Google Reader is much bemoaned by bloggers like myself, many of whom believe that its end was why blogs died. That’s a beautiful revisionist history that I won’t be taking part in here. Google Reader, which was essentially a very well-designed RSS feed with a mild interactive component, died because Google decided they didn’t want to play the game in the way that its founders had said they’d play it. Those ethical foundations proved extremely easy to discard once some shiny new companies, most notably Facebook and Twitter, began raking in billions of dollars.

[…]

The reason the death of Google Reader matters, here, is that it marks a pivotal moment in the deliberate and engineered shrinking of the internet. When Google Reader died, article discovery shifted. People were no longer reading RSS feeds, finding new sites, following them, and being updated when those sites posted. Instead, they were scrolling on the endless feed of Twitter, and (at the time) Facebook, and they got whatever they got.

[…]

It is worth remembering that the internet wasn’t supposed to be like this. It wasn’t supposed to be six boring men with too much money creating spaces that no one likes but everyone is forced to use because those men have driven every other form of online existence into the ground. The internet was supposed to have pockets, to have enchanting forests you could stumble into and dark ravines you knew better than to enter. The internet was supposed to be a place of opportunity, not just for profit but for surprise and connection and delight. Instead, like most everything American enterprise has promised held some new dream, it has turned out to be the same old thing—a dream for a few, and something much more confining for everyone else.

Source: The Internet Isn’t Meant To Be So Small | Defector

It's time to strictly regulate vaping

My 16-year-old son estimates that about 70% of his year at school vapes. He might be exaggerating a bit, but there are clouds of vape fumes that accompany students leaving the local school as they walk home.

Vaping may well be safer than smoking, but we're not sure of the long-term effects, and nicotine remains an addictive substance. That's not to mention the effect on the planet of single-use vapes.

So I'm pleased that the Australian government have taken a hard line on this, and hope other countries do likewise. It's a nonsense to see tobacco hidden behind a counter and screen, while vapes are on sale next to bread and milk on supermarket shelves.

Someone vaping

Recreational vaping will be banned in Australia, as part of a major crackdown amid what experts say is an "epidemic".

[...]

Also known as e-cigarettes, vapes heat a liquid - usually containing nicotine - turning it into a vapour that users inhale. They are widely seen as a product to help smokers quit.

But in Australia, vapes have exploded in popularity as a recreational product, particularly among young people in cities.

"Just like they did with smoking… 'Big Tobacco' has taken another addictive product, wrapped it in shiny packaging and added sweet flavours to create a new generation of nicotine addicts," [Health Minister Mark] Butler said in a speech announcing reforms on Tuesday.

[...]

Research suggests one in six Australians aged 14-17 years old has vaped, and one in four people aged 18-24.

"Only 1 in 70 people my age has vaped," said Mr Butler, who is 52.

He said the products are being deliberately targeted at kids and are readily available "alongside lollies and chocolate bars" in retail stores.

Source: Australia to ban recreational vaping in major public health move | BBC News

NYC 🫶 renewable energy

New York’s Build Public Renewables Act (BPRA) demonstrates the strength of grassroots movements and the potential of publicly owned utilities to lead the way in the adoption of renewable energy.

This serves as a reminder that decentralised power structures can be more agile and responsive to the needs of the public. When communities have a say in decision-making processes, they can make bold moves towards a sustainable future, create new jobs, and ensure equitable access to clean, affordable energy. ✊

The skyline behind the Brooklyn Bridge in New York on 16 April.

New York state has passed legislation that will scale up the state’s renewable energy production and signals a major step toward moving utilities out of private hands to become publicly owned.

[…]

The Build Public Renewables Act (BPRA) will ensure that all state-owned properties that ordinarily receive power from the New York power authority (NYPA) are run on renewable energy by 2030. It will also require municipally owned properties – including many hospitals and schools, as well as public housing and public transit – to switch to renewable energy by 2035.

[…]

The passage of this first-of-its-kind law comes after years of grassroots campaigning by climate and environmental organizers in New York state.

[…]

Historically, when utilities are owned by investors, profits go to shareholders. But in publicly owned models, profits are reinvested in the utility’s operations. Rates on energy bills are also generally lower.

[…]

The newly passed law also ensures creation of union jobs for the renewable projects, guaranteeing pay rate protection, offering retraining, and making sure that new positions are filled with workers who have lost or would be losing employment in the non-renewable energy sector.

Source: New York takes big step toward renewable energy in ‘historic’ climate win | The Guardian

The 'value' of a degree

I’ve got two things to say about this article in The Economist. One is to do with alternative credentialing, and the other is to do with my first degree.

  1. The rhetoric around Open Badges in the early days was that it would mean the end of universities. Instead, they have co-opted them as 'microcredentials' in a way that unbundles chunky degrees into bitesize pieces. Universities are now more likely to work with employers on these microcredentials, which is a benefit to employability, at the expense of a rounded 'liberal' education.
  2. My first degree was in Philosophy, which most people would assume makes you entirely unemployable. The reverse is actually true, especially for knowledge work. I should imagine that in a world where we need, for example, more AI ethicists, this trend will only continue.

The value of a university education, to my mind, isn't really how much you earn specifically because of the piece of paper you earn at the end of it. Instead, it's a way to broaden your mind by (hopefully) moving away from where you grew up and encountering people who think differently.

Chart showing the (economic) "value" of different degrees

In England 25% of male graduates and 15% of female ones will take home less money over their careers than peers who do not get a degree, according to the Institute for Fiscal Studies (IFS), a research outfit. America has less comprehensive data but has begun publishing the share of students at thousands of institutions who do not manage to earn more than the average high-school graduate early on. Six years after enrolment, 27% of students at a typical four-year university fail to do so, calculate researchers at Georgetown University in Washington, DC. In the long tail, comprising the worst 30% of America’s two- and four-year institutions, more than half of people who enroll lag this benchmark.

[…]

Earnings data in Britain call into question the assumption that bright youngsters will necessarily benefit from being pushed towards very selective institutions, says Jack Britton of the IFS. In order to beat fierce competition for places, some youngsters apply for whatever subject seems easiest, even if it is not one that usually brings a high return. Parents fixated on getting their offspring into Oxford or Cambridge, regardless of subject, should take note. But there is also evidence that tackling a high-earning course for the sake of it can backfire. Norwegian research finds that students whose true desire is to study humanities, but who end up studying science, earn less after ten years than they probably otherwise would have.

Source: Was your degree really worth it? | The Economist

How to hold a 'preferendum'

I like this idea a lot. The only caveat is that we could potentially be ruled by “the will of the people” in a way that degenerates into the worst kind of populism.

However, I get the feeling that if this happens often enough, in practice it would be at worst benign, and at best a net benefit to democracy.

The preferendum is a highly promising instrument for public decision-making, especially when it is preceded by a well-designed, deliberative group of citizens representative of the public at large and succeeded by clear government action. It can be integrated within existing structures of public participation and might help bridge the gap between deliberative and representative processes.

An explanation of how it would work:

At the polling station during the next general election, you get not one but two ballot papers. The first is your usual list of candidates and their political parties. The second is something new — a document with 30 different proposals that you are invited to analyze, one after the other.

Underneath each idea it says “strongly disagree,” “disagree,” “agree,” “strongly agree,” etc. It feels like one of those online questionnaires you’ve seen many times before.

At the bottom of the form, you are invited to highlight the five proposals you care about most. Every citizen in your country on voting day would be looking at the same list and doing what you are doing in the voting booth: rating and ranking proposals. The goal is to establish a list of shared priorities.

The process looks like a referendum, a process you might’ve participated in before. But where a referendum asks you for a straight yes or no answer to a certain question, this new process — this preferendum — has a much richer interface for indicating your policy preferences. You get to translate your individual preferences into the collective priorities of your community.
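
The rate-and-rank mechanism described above is concrete enough to sketch. Everything in this snippet is my own guess at a sensible tally, not anything specified in the article: I’ve assumed ratings are encoded 0–3 (“strongly disagree” to “strongly agree”), that each ballot names up to five priorities, and that proposals are ranked first by priority mentions and then by mean rating.

```python
from collections import Counter
from statistics import mean

def tally(ballots):
    """Aggregate preferendum ballots into a ranked list of proposals.

    Each ballot is a dict with 'ratings' (proposal -> 0..3 score)
    and 'priorities' (the up-to-five proposals the voter cares about
    most). Ranking by (priority mentions, mean rating) is one possible
    way to turn individual preferences into collective priorities.
    """
    ratings, priorities = {}, Counter()
    for ballot in ballots:
        for proposal, score in ballot["ratings"].items():
            ratings.setdefault(proposal, []).append(score)
        priorities.update(ballot["priorities"])
    return sorted(
        ratings,
        key=lambda p: (priorities[p], mean(ratings[p])),
        reverse=True,
    )

ballots = [
    {"ratings": {"housing": 3, "transit": 2}, "priorities": ["housing"]},
    {"ratings": {"housing": 2, "transit": 3}, "priorities": ["housing"]},
]
print(tally(ballots))  # housing wins on priority mentions
```

A real preferendum would need to settle exactly these design choices, which is presumably why the article stresses pairing it with a well-designed deliberative process.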

Source: Democracy’s Missing Link | NOEMA

The problem with feminism is not that it has gone too far. It is that it has not gone far enough.

I listened to a podcast episode earlier this week entitled What the World of Psychology Gets Wrong About Men. After a few minutes, I considered turning it off, as I felt that the guest, Dr. John Barry, was about to stray into “men are under attack” territory.

But I kept listening, and I was wrong. It was a really balanced, well-structured conversation which pointed out how problematic the term “toxic masculinity” is when it’s applied to any behaviour we don’t like that’s exhibited by men. That’s not how the phrase originated.

This article is a review of Richard Reeves' new book. What struck me about it was the discussion of how young men’s veneration of hugely problematic figures such as Jordan Peterson, Andrew Tate, and Donald Trump is a symptom of male alienation. “Women’s lives have been recast. Men’s lives have not.”

In his new book, 'Of Boys and Men', Richard Reeves argues that the [crisis of masculinity] is structural. Society has undergone profound cultural and economic changes in the past few decades and many of them have left men—especially working-class men—disoriented and demoralized. As certain structural barriers that used to hinder women have been removed, women have proven their “natural advantage” in several areas, including in our colleges and universities. The structural disadvantages faced by men, meanwhile, have only become more entrenched during the same period. Several rounds of globalization, more outsourcing of traditionally “male” sectors like heavy industry, increasing automation, and greater workplace competition from women meant that, for many men, the economic picture has been getting bleaker by the year.

As a result, many men are struggling to fulfill their own outmoded expectations of what a man should be. “The problem with feminism, as a liberation movement, is not that it has ‘gone too far,’” Reeves writes. “It is that it has not gone far enough”—that is, it has not succeeded in replacing traditional models of masculinity with something more adequate to our current circumstances. The Western male is stuck in a culture of masculinity that is now desperately mismatched with his material reality. “Women’s lives have been recast,” Reeves writes. “Men’s lives have not.” Men have been consigned to “cultural redundancy.”

[…]

Addressing the kind of male disadvantages that Reeves catalogs does not mean ignoring or excusing inequalities that favor men over women. It’s possible, Reeves writes, to “hold two thoughts in our head at once.” Indeed, it’s urgent that we do so.

Source: Have Men Become Culturally Redundant? | Commonweal Magazine

The future of AI will always be more than six months away

A remarkably sober look at the need for regulation, transparency around how models are trained, and costs in the world of AI. It makes a really good point about the UX required for machine learning to be useful at scale.

“I have learned from experience that leaving tools completely open-ended tends to confuse users more than assist,” says Kirk. “Think of it like a hall of doors that is infinite. Most humans would stand there perplexed with what to do. We have a lot of work to do to determine the optimal doors to present to users.” Mason has a similar observation, adding that “in the same way that ChatGPT was mainly a UX improvement over GPT-3, I think that we’re just at the beginning of inventing the UI metaphors we’ll need to effectively use AI models in products.”

[…]

Augmenting work with AI could be worthwhile despite these problems. This was certainly true of the computing revolution: Many people need training to use Word and Excel, but few would propose typewriters or graph paper as a better alternative. Still, it’s clear that a future in which “we automate away all the jobs, including the fulfilling ones,” is more than six months away, as the Future of Life Institute’s letter frets. The AI revolution is unfolding right now—and will still be unfolding a decade from today.

Source: AI Can’t Take Over Everyone’s Jobs Soon (If Ever) | IEEE Spectrum

An urgency to somehow bend the algorithms

The album ‘Homework’ by Daft Punk came out in 1997 when I was 16 years old. That’s the same age as my son is now. I think it’s fair to say that it changed my life.

When I worked in HMV as a student, I used my access to the huge database to discover and order in rare Japan-only releases of Daft Punk’s music. I also discovered music that the duo behind Daft Punk, Thomas Bangalter and Guy-Manuel de Homem-Christo, released on their own labels.

I was sad when I learned that Daft Punk was to be no more, but reading this interview with Thomas Bangalter in The Guardian helps make sense of it. I think it’s particularly important in life not to become a caricature of yourself. For Bangalter, going from scoring a film like Irréversible to ballet couldn’t be more different, really.

Thomas Bangalter

Did the future lose its allure at some point? “It’s interesting,” he ponders. “You either have the content or the form. Every artist wants to create their own little revolution and try to do things that haven’t been done. That’s kind of the punk aspect. But you ultimately become a caricature of yourself once you succeed.” The point, he says, is to do something different every time. “It works in opposition. These robots, they’re like the glorification of technology. But even in 2005, when we made this film Electroma, they wanted to become human. It’s human nature – the grass is always greener on the other side.”

[…]

Where does Bangalter feel Daft Punk’s influence now? “There used to be a lot of barriers between genres of music. I was hopeful there was a possibility to break these. That was part of the message of what we did musically.” Pop tribalism is indeed over, and while that can’t be credited to Daft Punk alone – piracy, streaming and three decades of internet did their bit – his hunch was once again correct.

“In some way the world is much more polarised now, but not really musically – musically there is this ability to mix and match and create levels of conflicting aesthetics or clashing ideas. I just hope that the tolerance existing right now in music will exist more in society as well.” The defecting robot has one more warning: “Now there is an urgency to somehow bend the algorithms.”

Source: Up all night to get jeté! Thomas Bangalter on hanging up his Daft Punk helmet – and leaping into ballet | The Guardian

The web is fragmentary

I love that this article channels both Ellen Ullman’s excellent book Close to the Machine and the weird allure of spreadsheets. I have a love/hate relationship with the latter, I have to say.

The key point that this article makes, which I think a few of us realised even before the pandemic, is that the web is fragmentary by default. Huge silos of common experience will come and go, and that’s OK.

If we were to wipe the slate clean—no more platform-specific formats, no more slick UIs, no more engagement-capturing algorithms—would web users even know what to make online? The question has felt particularly acute these past few months, as Twitter users flounder to figure out where to go next, even as they still feel tethered to the increasingly broken platform. Setting aside the very real issue of building a critical mass of users on another site, the question of what to do on another site runs through many of these conversations. In an ideal world, what would a platform allow a user to do?

[...]

Creation on the web has always been about those constraints, whether technical limitations or the specific ways systems were designed. By the late ’90s, the web had grown much more participatory than the one Ellen Ullman was writing about. With a little HTML and CSS, ordinary users could create all sorts of things on the proverbial blank page—so long as it was mostly text, with maybe a few low-res images or the occasional sparkly animated gif. The first decade of the 2000s saw the rise of both social networking and blogging, but even as technical capabilities were rapidly expanding, for the average user it was far less of a free-for-all than the DIY spirit of the early years. The Web 2.0 shift to user-generated content centered the user—but it was on the platforms’ terms. And in an effort to make content creation as “user-friendly” as possible, platforms were once again, after the openness of the webring/Geocities era, building narrow pathways for users to take.

[…]

But constraints on the web today aren’t just about what our tools encourage us to do on a technical level—they’re also about what it’s like, more broadly, to use a platform. “On the old-school internet that I was on when I was a teenager, the constraints were the tools,” says [Michael Ann DeVito, a postdoctoral computing innovation fellow in the Department of Information Science at the University of Colorado Boulder]. “Could you create a hit viral video in 1996? No, we did not have the technology and infrastructure to get that video distributed. For a one-minute video, you would spend two days uploading it, and nobody would have had the connection to download it. The systems didn’t afford that kind of expression.”

[…]

The ideal solution likely lies in multiplicity: no massively scaled platform can do everything, so why continue trying to make one size only sort of fit all? Fragmenting our social and creative platforms wouldn’t just expand the ways we could share things with the world; a greater variety of affordances—and yes, constraints as well—would give us a greater range of pathways into creativity. As the current big platforms rush to copy each other (or, more to the point, copy TikTok), the idea of smaller, more varied platforms might feel antithetical; so, too, might the idea that the tech industry would be willing to invest in something that won’t endlessly grow. But the current platform malaise won’t be solved by scale and brute force. Users have many different needs, and in the next era of the web, they should be offered many different solutions.

Source: There’s No Such Thing as a One-Size-Fits-All Web | WIRED

The patchwork progress of maturity

This short post outlines in a pithy way how being an adult is so difficult: we mature in different aspects of our lives at different rates. In turn, this makes relationships difficult — especially as a parent.

AI art. Midjourney prompt: "calm, male parent consoling a crying child --aspect 16:9 --v 5 --no text words letters signatures"

We tend to think of immaturity and maturity as dichotomous, uniform states. Once you leave behind the former and enter the latter, you’re mature through and through. 

Yet, in reality, maturation follows a patchwork pattern of progress.

[...]

Maybe you react to receiving criticism with stoic equilibrium, but respond to having your birthday forgotten with perturbed petulance. 

Maybe you can give a presentation at work with perfect confidence, but can’t approach an attractive woman without sweat-inducing fear. 

[...]

As the midcentury writers Harry and Bonaro Overstreet put it, “All through life we have to take turns, as it were, being ‘parents’ to one another — because we all take turns at being children.” 

Source: Sunday Firesides: Parent the Immature in Others | The Art of Manliness

Image: Midjourney (prompt in alt text)

Fitting LLMs to the phenomena

The author of this post really needs to read Thomas Kuhn’s The Structure of Scientific Revolutions and some Marshall McLuhan (especially on tetrads).

What he’s describing here has to do with mindsets: the attempts we make to fit ‘the phenomena’ into our existing mental models. When that doesn’t work, there’s a crisis, and we have to come up with new paradigms.

But, more than that, to use McLuhan’s phrase, we “march backwards into the future” always looking to the past to make sense of the present — and future.

AI image. Midjourney prompt: "tree in shape of brain | ladder resting against trunk of tree --aspect 16:9 --v 5 --no text words letters signatures"

I have a theory that technological cycles are like the stages of Squid Game: Each one is almost entirely disconnected from the last, and you never know what the next game is going to be until you’re in the arena.

For example, some new technology, like the automobile, the internet, or mobile computing, gets introduced. We first try to fit it into the world as it currently exists: The car is a mechanical horse; the mobile internet is the desktop internet on a smaller screen. But we very quickly figure out that this new technology enables some completely new way of living. The geography of lives can be completely different; we can design an internet that is exclusively built for our phones. Before the technology arrived, we wanted improvements on what we had, like the proverbial faster horse. After, we invent things that were unimaginable before—how would you explain everything about TikTok to someone from the eighties? Each new breakthrough is a discontinuity, and teleports us to a new world—and, for companies, into a new competitive game—that would’ve been nearly impossible to anticipate from our current world.

Artificial intelligence, it seems, will be the next discontinuity. That means it won’t tack itself onto our lives as they are today, and tweak them around the edges; it will yank us towards something that is entirely different and unfamiliar.

AI will have the same effect on the data ecosystem. We’ll initially try to insert LLMs into the game we’re currently playing, by using them to help us write SQL, create documentation, find old dashboards, or summarize queries.

But these changes will be short-lived. Over time, we’ll find novel things to do with AI, just as we did with the cloud and cloud data warehouses. Our data models won’t be augmented by LLMs; they’ll be built for LLMs. We won’t glue natural language inputs on top of our existing interfaces; natural language will become the default way we interact with computers. If a bot can write data documentation on demand for us, what’s the point of writing it down at all? And we’re finally going to deliver on the promise of self-serve BI in ways that are profoundly different than what we’ve tried in the past.

Source: The new philosophers | Benn Stancil

Žižek on ChatGPT

Slavoj Žižek is never the easiest academic to read, and this (translated) article about ChatGPT and AI is no different. However, if you skip the bizarre introduction, I do think he makes an interesting point about people being able to blame AIs for ambiguity and misunderstandings.

Just as we create an online avatar through which to engage the Other and affiliate with online fraternities, might we not similarly use AI personas to take over these risky functions when we grow tired, in the same way bots are used to cheat in competitive online video games, or a driverless car might navigate the critical journey to our destination? ... We just sit back and cheer on our digital AI persona until it says something completely unacceptable. At that point, we chip in and say, ‘That wasn’t me! It was my AI.’

Therefore, the AI “offers no solution to segregation and the fundamental isolation and antagonism we still suffer from, since without responsibility, there can be no post-givenness.” Rousselle introduced the term “post-givenness” to denote a “field of ambiguity and linguistic uncertainty that allows a reaching out to the other in the field of what is known as the non-rapport. It thus deals directly with the question of impossibility as we relate to the other. It is about dealing with our neighbour’s opaque monstrosity that can never be effaced even as we reach out to them on the best terms.”

[…]

“We dream outside of ourselves today, and hence that systems like ChatGPT and the Metaverse operate by offering themselves the very space we have lost due to the old castrative models falling by the wayside.” With the digitized unconscious we get a direct in(ter)vention of the unconscious - but then why are we not overwhelmed by the unbearable closeness of jouissance (enjoyment), as is the case with psychotics?

Source: ChatGPT Says What Our Unconscious Radically Represses | Sublation Magazine

Relationships and therapy-speak

I’m hugely supportive of people choosing therapies such as CBT and using language from NVC. However, it’s possible to go too far.

My wife has told me “not to use that language” with her before when she thinks that I’m using it in a way that doesn’t feel natural. And it seems to be particularly prevalent among younger generations. Interesting article.

In recent years, therapy concepts like self-care and boundary-setting have shown up everywhere online, with Instagram accounts and other social media communities sharing mantras and advice advocating for self-actualization. TikTok therapists like Nadia Addesi and TherapyJeff offer tips for struggling with anxiety, self-esteem, and people-pleasing. “Therapy speak” — prescriptive language describing certain psychological concepts and behaviors — can be found everywhere from group chats to dating apps. Now, we have more language to advocate for ourselves and our needs, whether it be canceling plans when we feel overwhelmed or ending relationships that no longer serve us.

It’s important to be able to set boundaries and advocate for yourself. Occasionally, though, the emphasis on protecting one’s individual needs can overlook the fact that someone else is on the other side of that boundary-setting. In 2019, for instance, a relationship coach’s Twitter thread offering a template for telling friends in need of support that you’re “at capacity” at the moment drew criticism for equating friendship to emotional labor. Earlier this year, a clinical psychologist’s TikTok video outlining how to break up with a friend went viral after viewers pointed out that it sounded like a missive from HR. Critics have noted that personal relationships require a touch more compassion than some of these therapeutic blueprints offer.

[…]

There are reasons a person might be tempted to overindulge in some of this self-care behavior. Conflict can be difficult, and people might think they can avoid it by asserting their needs in a way that prevents the other person from responding — by using HR language to end a friendship, for instance, or via straight-up ghosting. And by couching the behavior in therapy language, the hard “boundary” can feel more legitimate, or even virtuous.

[…]

Beyond boundary-setting and inflexibility, the proliferation of therapy speak has also inspired some people to assign labels like “toxic” and “narcissistic” to certain relationships or behaviors. Though toxic people and narcissists do exist, these armchair diagnoses don’t always accurately capture every dynamic, and being on the receiving end of this language can be destabilizing when it’s misplaced.

Source: Is Therapy-Speak Making Us Selfish?

More on why billionaires should not exist

This article frames ultra-rich people owning and using superyachts and private jets as ‘theft’ because it reduces the amount of time we’ve got to avert climate disaster.

Yes, it is.

But it’s also theft because the purchase of these yachts and jets is only possible because of the enormous sums of money stolen from workers to fund such extravagant lifestyles.

Owning or operating a superyacht is probably the most harmful thing an individual can do to the climate. If we’re serious about avoiding climate chaos, we need to tax, or at the very least shame, these resource-hoarding behemoths out of existence. In fact, taking on the carbon aristocracy, and their most emissions-intensive modes of travel and leisure, may be the best chance we have to improve our collective climate morale and increase our appetite for personal sacrifice, from individual behavior changes to sweeping policy mandates.

On an individual basis, the superrich pollute far more than the rest of us, and travel is one of the biggest parts of that footprint. Take, for instance, Rising Sun, the 454-foot, 82-room megaship owned by the DreamWorks co-founder David Geffen. According to a 2021 analysis in the journal Sustainability, the diesel fuel powering Mr. Geffen’s boating habit spews an estimated 16,320 tons of carbon-dioxide-equivalent gases into the atmosphere annually, almost 800 times what the average American generates in a year.

And that’s just a single ship. Worldwide, more than 5,500 private vessels clock in about 100 feet or longer, the size at which a yacht becomes a superyacht. This fleet pollutes as much as entire nations: The 300 biggest boats alone emit 315,000 tons of carbon dioxide each year, based on their likely usage — about as much as Burundi’s more than 10 million inhabitants. Indeed, a 200-foot vessel burns 132 gallons of diesel fuel an hour standing still and can guzzle 2,200 gallons just to travel 100 nautical miles.

Source: The Superyachts of Billionaires Are Starting to Look a Lot Like Theft | The New York Times

Negative UK growth

Growth isn't everything. However, the fact that the word 'Brexit' does not appear anywhere in this article tells you all you need to know about (a) British politics, and (b) the relationship between the government and the BBC.

IMF growth forecast showing UK and Germany in red (negative) and other countries (including Russia!) as positive

The UK is set to be one of the worst performing major economies in the world this year, according to the International Monetary Fund (IMF).

[...]

IMF researchers have previously pointed to Britain's exposure to high gas prices, rising interest rates and a sluggish trade performance as reasons for its weak economic performance.

[...]

Liberal Democrat Treasury spokesperson Sarah Olney said the forecast was "another damning indictment of this Conservative government's record on the economy".

Source: UK to be one of worst performing economies this year, predicts IMF | BBC News

The laziness of helicopter parenting

This article in The Cut by Kathryn Jezer-Morton is fantastic. There’s a tension in parenting between, on the one hand, giving your kids space to grow, be themselves, and make mistakes — and, on the other, looking out for them, being time-efficient, and avoiding the opprobrium of other parents.

Illustration of kid being followed by helicopter with a face

As my kids get older, I am learning how labor-intensive it is to teach them to be independent, and I’m beginning to think that we have the helicopter-parent/hands-off-parent binary all wrong. Maybe helicopter parenting is a form of neglect, one that might even be comparable in its harmfulness to the kind of neglect that forces kids to grow up by their own wits. The crisis of teen mental health in the wake of COVID can be explained in all sorts of ways, but a common denominator is that many teenagers feel that they have no control over their lives, which is distressing for any human. When you teach a kid to be safely independent, you give them some of that control. Denying a kid that opportunity is cruelty disguised as parental virtue – it’s beyond fucked up and dark, when you really think about it.

I also wonder if we misunderstand some of the motivations for helicopter parenting. We assume it’s an anxiety response, and I’m sure that explains a lot of it, but it’s also the path of least resistance.

[…]

“Parents who are very involved, wanting to know what their child is doing in the world — that is often considered part of helicopter parenting, but that isn’t necessarily a problem,” said [Dr. Gail Saltz, clinical associate professor of psychiatry at New York-Presbyterian Hospital and host of the How Can I Help? podcast from iHeartRadio]. “Being involved is distinct from wanting to help a child make all of their decisions. The problem is ‘I will help you do all the things. I will get involved in your conflicts. I will not let you make any mistakes.’” According to Saltz, even parents of young children should avoid approaching parenting as a troubleshooting exercise. Children become accustomed to this degree of parental involvement. The more time parents spend clearing the path for their offspring, the harder it is for children to adapt to facing obstacles on their own.

[...]

Helicopter parenting is also a way of protecting yourself from the judgment of other parents. In fact, its specter can loom even larger than actual threats to children’s safety. The off-piste vigilance of strangers can make an otherwise safe, ordinary situation spiral into conflict and defensiveness.

[...]

It doesn’t take only energy and attention to teach your kids to navigate independence safely. It takes a certain willingness to accept that someone out there might think you’re a bad parent. Allowing imagined judgment to cloud our decision making is like letting an internet comments section make our choices for us. Helicopter parenting is the manifestation of overlapping anxieties about the hazards of the world and about the opinions of other people. It’s also a product of the narcissistic delusion that our children’s (inevitable, developmentally necessary) failures are our own.

Source: Are Helicopter Parents Actually Lazy? | The Cut

Illustration: Hannah Buckman

Spaced repetition, newsletters, and book-writing

My son’s revising for exams at the moment. I used to be a teacher. One of the things that I’m trying to get across to him is the importance of ‘spaced repetition’.

This post is interesting because it takes that idea and suggests that the best newsletters are ones that help you understand key concepts by giving examples on a regular basis. The author suggests that, in this way, you can effectively write a book.
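The expanding-interval idea is easy to see in code. Here’s a minimal sketch in Python — `review_schedule` and its parameters are hypothetical, a toy illustration rather than any real flashcard app’s algorithm:

```python
from datetime import date, timedelta

def review_schedule(start, reviews=5, first_gap_days=1, factor=7):
    """Return the dates on which to revisit a concept, each gap
    `factor` times longer than the last (roughly: a day, a week,
    then ever-longer stretches)."""
    dates, gap, current = [], first_gap_days, start
    for _ in range(reviews):
        current += timedelta(days=gap)  # next review lands `gap` days later
        dates.append(current)
        gap *= factor  # stretch the interval for the following review
    return dates

# e.g. a concept first read on 1 May 2023 would be refreshed
# after a day, then a week, then roughly seven weeks, and so on
print(review_schedule(date(2023, 5, 1), reviews=3))
```

In newsletter terms, each issue that re-illustrates an earlier concept acts like one of these scheduled reviews, just without the flashcards.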

AI-generated image created by Midjourney with prompt: "a calendar with different colored dots on turquoise background, in the style of light gray and orange, personal iconography, #vfxfriday, oshare kei, hallyu, flat form --ar 142:75"
Spaced repetition is a learning technique where you embed things into memory by re-studying them on a regular basis – for example, organising flash cards so that each new concept is refreshed after a day, then a week, then a month, then a year. One of the unappreciated functions of many newsletters – to be clear, not this newsletter, but other newsletters – is to function as an ad-hoc spaced repetition system.

[…]

One thing I wonder about is which kinds of topics are better served by newsletters, and which are better served by books. Presumably if you’re creating a complex argument that requires the reader to hold in mind various ideas that build together, a book is a better fit than a newsletter.

However, many books I read strike me as having One Big Point and then a long series of examples, and in that case I suspect a newsletter dribbling out the examples might be better for reader retention.

[…]

We spend so much time reading that it always makes me sad to think that including a little more repetition would have disproportionate impact on our ability to remember and relate to the information we’ve read. In theory we could be note-taking and flash-carding after reading, but this (frankly) feels more like a chore than a pleasure. At their best, “repetitive” newsletters are one way to achieve the same goal less aversively.

Source: Spaced Repetition Through Newsletters | Atoms vs Bits

Image: Midjourney (see alt text for prompt)

Curiosity, projectories, and AI

I’ve read a lot of danah boyd’s work over the years, especially given how her research interests intersect with my work. In this long-ish post, she argues for an approach to AI driven by curiosity and the concept of ‘projectories’ (subject to guardrails).

xkcd cartoon on scenarios

I just returned from a three-month sabbatical spent mostly offline diving through history and I feel like I’ve returned to an alien planet full of serious utopian and dystopian thinking swirling simultaneously. I find myself nodding along because both the best case and worst case scenarios could happen. But also cringing because the passion behind these declarations has no room for nuance. Everything feels extreme and full of binaries. I am truly astonished by the deeply entrenched deterministic thinking that feels pervasive in these conversations.

[…]

Even though deterministic thinking can be extraordinarily problematic, it does have value. Studying the scientists and engineers at NASA, Lisa Messeri and Janet Vertesi describe how those who embark on space missions regularly manifest what they call “projectories.” In other words, they project what they’re doing now and what they’re working on into the future in order to create for themselves a deterministic-inflected road map. Within scientific communities, Messeri and Vertesi argue that projectories serve a very important function. They help teams come together collaboratively to achieve majestic accomplishments. At the same time, this serves as a cognitive buffer to mitigate against uncertainty and resource instability. Those of us on the outside might reinterpret this as the power of dreaming and hoping mixed with outright naiveté.

[...]

Rather than doubling down on deterministic thinking by creating projectories as guiding lights (or demons), I find it far more personally satisfying to see projected futures as something to interrogate. That shouldn’t be surprising since I’m a researcher and there’s nothing more enticing to a social scientist than asking questions about how a particular intervention might rearrange the social order.

[...]

Source: Resisting Deterministic Thinking | danah boyd

Imaginary friends for adults

At least in my circles, there’s been a lot of talk about parasocial relationships over the last decade or so. Usually, the discussion is descriptive and simply observing the phenomenon.

In this article for The Atlantic, Arthur C. Brooks does a bit of analysis in terms of seeing parasocial relationships as a type of avoidant behaviour.

people sitting on a person's hat

The term parasocial interaction was introduced in the 1950s by the social scientists Donald Horton and R. Richard Wohl. It was the early days of home television, and they were seeing people develop an intimate sense of relationship with actors who were appearing virtually in their home. Today, the definition is much broader. After all, actors, singers, comedians, athletes, and countless other celebrities are available to us in more ways than ever before. Forming parasocial bonds has never been easier.

[…]

Although there are no exact statistics on frequency that I have found, psychologists do document cases of parasocial relationships that can go much deeper, with significant consequences. Scholars note that parasocial bonds exist on a continuum of intensity, from entertainment-social (say, gossiping about a celebrity) to intense-personal (intense feelings toward a celebrity) to borderline-pathological (uncontrollable behavior and fantasies). At the deepest level, the parasocial relationship can be dangerous, such as when a fan loses touch with reality and stalks a star, under the delusion that they have a real-life connection.

[…]

In 2021, two psychologists from York University, in Canada, found that forming parasocial bonds was strongly related to avoidant attachment. That is, people who tended to push others away in their day-to-day lives were more likely than others to relate to fictional characters, and especially to characters who are also emotionally avoidant.

Source: Parasocial Relationships Are Just Imaginary Friends for Adults | The Atlantic

The madman is the man who has lost everything except his reason

I always enjoy reading L.M. Sacasas' thoughts on the intersection of technology, society, and ethics. This article is no different. In addition to the quotation from G.K. Chesterton which provides the title for this post, Sacasas also quotes Wendell Berry as saying, “It is easy for me to imagine that the next great division of the world will be between people who wish to live as creatures and people who wish to live as machines."

While I’ve chosen to highlight the part riffing off David Noble’s discussion of technology as religion, I’d highly recommend reading the last three paragraphs of Sacasas' article. In it, he talks about AI as being “the culmination of a longstanding trajectory… [towards] the eclipse of the human person”.

AI created with Midjourney prompt: "religion of technology | manga | hypnotic --aspect 3:2 --no text words letters signatures"

The late David Noble’s The Religion of Technology: The Divinity of Man and the Spirit of Invention, first published in 1997, is a book that I turn to often. Noble was adamant about the sense in which readers should understand the phrase “religion of technology.” “Modern technology and modern faith are neither complements nor opposites,” Noble argued, “nor do they represent succeeding stages of human development. They are merged, and always have been, the technological enterprise being, at the same time, an essentially religious endeavor.”

[…]

The Enlightenment did not, as it turns out, vanquish Religion, driving it far from the pure realms of Science and Technology. In fact, to the degree that the radical Enlightenment’s assault on religious faith was successful, it empowered the religion of technology. To put this another way, the Enlightenment—and, yes, we are painting with broad strokes here—did not do away with the notions of Providence, Heaven, and Grace. Rather, the Enlightenment re-framed these as Progress, Utopia, and Technology respectively. If heaven had been understood as a transcendent goal achieved with the aid of divine grace within the context of the providentially ordered unfolding of human history, it became a Utopian vision, a heaven on earth, achieved by the ministrations of Science and Technology within the context of Progress, an inexorable force driving history toward its Utopian consummation.

[…]

In other words, we might frame the religion of technology not so much as a Christian heresy, but rather as (post-)Christian fan-fiction, an elaborate imagining of how the hopes articulated by the Christian faith will materialize as a consequence of human ingenuity in the absence of divine action.

Source: Apocalyptic AI | The Convivial Society

Image: Midjourney (see alt text for prompt)

Battles over human rights are not 'culture wars'

The political right always seems to find ways to describe in neutral or pejorative terms (e.g. “woke”) things that threaten the (racist, sexist, homophobic) status quo.

One of these tactics is to reframe human rights battles as ‘culture wars’, as Jen Sorensen so perfectly skewers in this cartoon.

political cartoon

The term “culture war” is being thrown around a lot these days.

Source: Culture War Hawks | The Nib (via Kottke)

The progress of AI art

After subscribing to ChatGPT even before version 4 came out, I subscribed to Midjourney recently. There’s a lot of concern around these things, and rightly so. But also, it’s exciting and (despite what some say) creative.

AI was arguably the most contentious topic in the world of art and design last year, and looks set to retain the same honour in 2023. Text-to-image generators have been causing controversy for a while now – but perhaps lost in all the noise is just how much they've developed in the last 12 months alone.

[…]

The images were created one year apart, with the exact same text prompt: ‘Donald Trump and Barack Obama playing basketball’. And while the first image is a nightmarish blob of barely distinguishable flesh, the second is practically photo-realistic.

Source: Mind-blowing image reveals how AI art has progressed in 1 year | Creative Bloq

Purpose, positioning, proposition

I’m just bookmarking this for next time I’m involved in a website redesign. Purpose, positioning, proposition. Right, got it.

Tentacle

Ultimately, in order to draw customers into the fold for the long-haul, you will need to offer your customers meaningful answers to the following three questions:
  1. Why are we here? [PURPOSE]

  2. How are we different? [POSITIONING]

  3. Why should you care? [PROPOSITION]

If you can do this with authenticity and relevance, then you may just be onto something powerful – even kraken-like – for your brand.

Source: Releasing the purpose kraken | ABA

Lifehouses, not churches

We used to go to church regularly. Then, as the kids grew older and sporting fixtures took over the weekend, we started going sporadically. Then, after the pandemic, we stopped going altogether. It seems we weren’t alone, as attendance, which was already declining, has fallen sharply. In fact, around 25% of Anglican churches no longer hold weekly services.

So what are we to do with these buildings? There are two massive ones at the end of our road, and a third was converted into a house a couple of decades ago. Writing in The Guardian, Simon Jenkins suggests we need to reconnect the buildings with the communities which surround them.

Church with daffodils

Throughout history these buildings have offered their publics ceremony and memorial, peace and meditation, charity and friendship, quite apart from faith. It is wrong that modern communities do not use them for such – or any other – purposes merely because religion has declined. They were built on the tithes of rich and poor alike.

[…]

It is senseless to expect the Church of England to find the money to maintain these places into the future. The solution must be to reconnect them to the surrounding communities from which the decline in worship has distanced them. They must be wholly or partly secularised. This is happening across Europe, where churches are being brought under the aegis of local councils. They can benefit from a specific – usually small – local “church tax” which, in countries such as Sweden and Germany, is voluntary. This has been the churches’ salvation.

Adam Greenfield expands on this with the concept of 'Lifehouses'. He discusses this in a Mastodon thread, with the following quotation coming from his newsletter:
The fundamental idea of the Lifehouse is that there should be a place in every three-four city-block radius where you can charge your phone when the power’s down everywhere else, draw drinking water when the supply from the mains is for whatever reason untrustworthy, gather with your neighbors to discuss and deliberate over matters of common concern, organize reliable childcare, borrow tools it doesn’t make sense for any one household to own individually, and so on, and that these can and should be one and the same place. As a foundation for collective resourcefulness, the Lifehouse is a practical implementation of solarpunk values, and it’s eminently doable.

[…]

And of course, in longer-established neighborhoods, there will often already be a building or physical site that organically serves many of these functions – the neighborhood’s naturally-arising Schelling Point, or node of unconscious coordination. Whether church, mosque, synagogue, high-school gym or public library, it will be where people instinctively turn for shelter and aid in times of trouble. What I believe our troubled times now ask of us is that we be more conscious and purposive about creating loose networks of such places, each of them provisioned against the hour of maximum need.

Source: The decline of churchgoing doesn’t have to mean the decline of churches – they can help us level up | The Guardian

Hiatus

Pause icon

I haven't posted here for a while and didn't send out a newsletter last month. While I've plenty of energy for other projects, I don't have much for Thought Shrapnel at the moment.

That may change, or it might not. Either way, I'm hitting pause here for a while.

There are just bodies, just us

Two books to add to my reading list, courtesy of this excellent review and analysis.

Illness, I think, is a temporality — and not, as Susan Sontag famously posited in Illness as Metaphor, a place, where everyone holds dual citizenship between the kingdoms of sickness and health and can pass between the two. The truer statement, it seems to me, belongs to Gilda Radner, who died young of ovarian cancer: “It’s always something.” Constantly dealing with those somethings takes time, and you can no longer even pretend that your life will go along in an orderly, productive way. But does anyone’s? I’ve come to realize that the bifurcation between the sick and the well, the disabled and the able-bodied, is capitalism’s intervention. In reality, there are just bodies, just us.

Two books published this fall trouble the binary between sickness and health. Health Communism, by Beatrice Adler-Bolton and Artie Vierkant, wholly refutes the possibility of being healthy under capitalism. The Future is Disabled, by Leah Lakshmi Piepzna-Samarasinha, argues that to meet a future full of catastrophe, we need to think and act like disability activists. These books want to talk about sickness as a source of solidarity, and a way forward out of our current, very unwell state.

[…]

Separating out the well and worthy workers from the sick and unproductive surplus class is one of capitalism’s more insidious divide-and-conquer tactics. We all know the person who brags about not taking one sick day in 20 years. But if capital separates the workers from the unwell, capitalists still manage to profit from both. The state, which could sustain the sickened surplus, instead neglects them, and the private health care sector steps in to profit. Adler-Bolton and Vierkant coin the term “extractive abandonment,” (a variation on Ruth Wilson Gilmore’s description of the carceral system as “organized abandonment”) to describe how public subsidies flow to privatized facilities offering substandard care, from for-profit nursing homes to prisons. As a result, those in need of care are less likely to receive it where they could thrive, let alone exercise their self-determination. Instead, they are shunted into a “warehouse” of care, a “public-private partnership of pure immiseration.”

Source: Is Anyone Ever Well? - Lux Magazine

Smoking as an analogy for unthinking phone use

Even if, like me, you turn all but the most important notifications off, it’s easy to get used to there being something new on your phone when you’re bored. Or waiting. Or feeling anxious.

If there isn’t something new there that’s immediately accessible, it becomes more boring. I haven’t had social media apps on my phone for years, but last week I logged out of several social networks in my mobile and desktop browsers.

You’ve got to replace these things with a habit, though. So I’ve now put books next to the places I tend to sit and scroll. To be honest, even playing on my Steam Deck is a better use of my time than most of the scrolling I do on social networks.

About twenty years later — last week — I found myself sitting at my kitchen table, mechanically upvoting and downvoting hot takes on Reddit when I realized I had been aimlessly thumbing my phone for at least twenty minutes. I was vaguely aware that I had not yet done the thing that caused me to reach for my phone in the first place, and could no longer remember what it was.

Even though I get caught up like that all the time, the nihilism of that particular twenty minutes really got to me. It was such a nothing thing to do. I said aloud what I was thinking: “That… was a total loss.”

Basically I had just aged myself by twenty minutes. Two virtual cigarettes, and not even a fading buzz to show for it. I learned nothing, gained nothing, made no friends, impacted the world not at all, did not improve my mood or my capacity to do anything useful. It was marginally enjoyable on some reptile-brain level, sure, but its ultimate result was only to bring me nearer to death. Using my phone like that was pure loss of life — like smoking, except without the benefits.

[...]

I’m not trying to make a moral appeal, only a practical one. It doesn’t necessarily follow that frivolous phone use is bad or wrong. It’s unwise, and we already know that it’s unwise. But perhaps it is as unwise as smoking. Perhaps indulging the urge to browse Reddit after checking your email is just as reckless and self-destructive as lighting up a Marlboro 100 after breakfast, and will one day be seen with all the same revulsion and taboo.

Only you know how resonant this proposition is for you. If you lose ten, twenty, or thirty minutes to frivolous phone use on a multiple-times-daily basis (I sure do), it might make sense to regard it as belonging to a much higher stratum of concern than we tend to assume. Instead of grouping it with I-probably-shouldn’t-but-who-cares sorts of behaviors, like rewatching barely-worthwhile TV shows or kicking off your shoes without untying them, perhaps it belongs with possibly-catastrophic vices like daily deep-fried lunch, road raging, or smoking.

Source: Most Phone Use is a Tragic Loss of Life | Raptitude

Living your best life

I didn’t know this guy, but for some reason clicked through to this post which appeared in my LinkedIn stream. It’s oddly affecting to read the words of someone who recently passed away, written so at peace with the world and the place he had in it.

I wrote a book, I wrote a play and at least six thousand blog posts rife with dumb hot takes and cancellable offences. I ran a newspaper, a theatre company and a business. After a mentor invited me to work on the Copenhagen Climate Talks, I realised I could earn a living and still be on the side of the angels. And so, I helped to change laws that protect nature; I compelled people to get vaccinated during a pandemic; and I shook the hands of Prime Ministers in Paris.

I loved a woman for 27 years, but that is private and not for you.

This has been my life: art, exploring, work and love. I’m proud of it and sad that it’s shortened. I haven’t seen Asia. Will the Canucks win the Stanley Cup in the next thirty years? Will people walk on Mars?

I have a Buddhist friend who legitimately believes that every person is doing their best all of the time. I’ve finally come around to this idea. I’ve lived the best life I could.

Source: They Were All Splendid | DarrenBarefoot.com

Britain is screwed

I followed a link from this article to some OECD data which shows, as in the chart below, that the UK has even lower welfare payments than the US. The economy of our country is absolutely broken, mainly due to Brexit, but also due to the chasm between everyday people and the elites.

OECD chart showing UK last in 'Benefits in unemployment, share of previous income'

On most measures, the country has the most limited welfare state of any developed country, including the United States, with the result being that working households are shouldering more risk than their peers and—as the Resolution Foundation recently found—today’s young Britons face paying far more in tax than they will ever receive back in terms of pensions and other benefits. The reverse is true of older cohorts.

There is also an unprecedented housing crisis, with young people increasingly excluded from home ownership if they cannot access family wealth. Public services are under unprecedented pressure, especially health care. Excess deaths have risen while Britain is the only country in Europe suffering from declining life expectancy.

Source: Britain Is Much Worse Off Than It Understands | Foreign Policy

Synesthetic xkcd

I’m a migraineur, and there’s an overlap between that group of people and those who are synesthetes. But it turns out that my kids, who do not (yet?) suffer from migraines, also strongly associate colour with things that other people do not usually associate with colour.

Days of the week, for example. We’ve had arguments over what colour ‘Monday’ is. So this xkcd cartoon made me laugh.

Source: xkcd: Electron Color

Bad Bard

Google is obviously a little freaked-out by tools such as ChatGPT and their potential ability to destroy large sections of their search business. However, it seems like they didn’t do even the most cursory checks of the promotional material they put out as part of the hurried launch of ‘Bard’.

This, of course, is our future: ‘truthy’ systems leading individuals, groups, and civilizations down the wrong path. I’m not optimistic about our future.

Google Bard screenshot

In the advertisement, Bard is given the prompt: "What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year old about?"

Bard responds with a number of answers, including one suggesting the JWST was used to take the very first pictures of a planet outside the Earth’s solar system, or exoplanets. This is inaccurate.

Source: Google AI chatbot Bard offers inaccurate information in company ad | Reuters

Buying when the market is selling

I love this. Nintendo is increasing the salaries of its employees even though it intends to make less of a profit. Short of giving everyone ownership, this is how you invest in your people during a downturn.

Nintendo plans to raise its employees' base pay by 10% this year in the wake of inflation. Reuters reported that Nintendo plans to raise salaries even as it reduced its profit expectation for the year. Nintendo previously cut its operating profit forecast from a projected 582 billion yen to 480 billion yen ($3.6 billion).

Nintendo also amended its projected software and hardware sales. It projects that the Switch will sell 18 million units this year, as opposed to the prior forecast of 19 million. Similarly, it dropped the software sales forecast from 210 million units to 205 million. Nintendo re-affirmed that it does not currently have plans to raise prices for its consoles or games.

Source: Nintendo Will Pay Its Workers 10% More | GameSpot

The party's over for office-based work

In-person working can be energising. But perhaps not every day, for most people? There’s a reason that lots of people have decided to continue to work at home after the pandemic showed them that a different approach was possible.

Take Google. The tech giant threw a massive welcome-back party complete with a Lizzo concert. Sure, it sounds cool, but unless Lizzo will one day be my manager, what does a concert have to do with getting me to my desk day after day after day? Will there be daily concerts? Everyone was isolated for two years. How does attending a concert with people I’ve never met or barely remember better connect me to the company? Being alone in a crowd would actually remind me just how few friends I have at the organization.
Source: Wake up, Corporate America: You can’t bribe, threaten, or feed people to get them back in the office | The Boston Globe

Sad Ben Affleck

I wouldn’t usually comment on celebrity culture, but I wanted to make three points here. First, are we sure that Ben Affleck isn’t depressed?

Second, why the continued assumption that being wealthy, famous, and good looking means you must be happy?

Third (and most importantly) even if you’re an actor, it doesn’t mean you’re good at dissimulation during down time. Some people just look bored when they’re bored. Like me.

It is this disconnect, you suspect, that makes Affleck so meme-able. He has everything, and yet he appears to enjoy none of it. Remember the Affleck of old, young and handsome and so cocky that you couldn’t help but take against the guy? That Affleck is gone. In his place is a man weighed down by the sheer punishing, relentless burden of life on Earth. And that, as you no doubt realise for yourself, is much more our speed.
Source: A mask of unadorned misery: how Ben Affleck became the world’s biggest meme | The Guardian

One place to rule them all?

Connor Oliver muses on the fact that, never mind the decline in ‘third places’ (or ‘third spaces’, as we’d probably call them in the UK), there’s a decline in second places/spaces. What happens if you live and work in the same place all of the time?

It’s a real issue and, as he points out, it’s particularly acute if you’re single and don’t have kids. I’ve lived and worked from home since 2012, and from this particular house since 2014. So travel is particularly important to me, as are my kids’ sporting fixtures!

I don't know who coined the term "third place" and while I don't really care, my understanding is that a third place is something along the lines of a hobby group, sports club, church, barbershop, or other place you go to socialize outside of your first and second places, home and work.

[…]

My question is though, what does one do when they no longer even have a second place (work)?

[…]

A not insignificant number of us have seen our first and second place merge into one and we’ve lost much of what made our second place a second place. In some more extreme examples like mine, people have never met their coworkers in person, or even know what some of their co-workers look like.

Source: A third place? I’m not sure I even have a second anymore. | Muezza.ca

Covid and heart attacks

Curiously, I discovered this via Hacker News, which linked to a news article about it that I couldn’t access in the UK. I guess they hadn’t got their GDPR act together. So I’m sharing a link to the original journal article.

What’s interesting to me about this is that my heart hasn’t been the same since I had Covid this time last year. And sure enough, the research in this article shows that deaths from acute myocardial infarctions (i.e. heart attacks) have gone up by a third for my age group. Makes you think.

The COVID-19 pandemic has had a detrimental impact on the healthcare system. Our study aimed to assess the extent and the disparity in excess acute myocardial infarction (AMI)-associated mortality during the pandemic, through the recent Omicron outbreak. Using data from the CDC's National Vital Statistics System, we identified 1 522 669 AMI-associated deaths occurring between 4/1/2012 and 3/31/2022. Accounting for seasonality, we compared age-standardized mortality rate (ASMR) for AMI-associated deaths between prepandemic and pandemic periods, including observed versus predicted ASMR, and examined temporal trends by demographic groups and region. Before the pandemic, AMI-associated mortality rates decreased across all subgroups. These trends reversed during the pandemic, with significant rises seen for the youngest-aged females and males even through the most recent period of the Omicron surge (10/2021–3/2022). The SAPC in the youngest and middle-age group in AMI-associated mortality increased by 5.3% (95% confidence interval [CI]: 1.6%–9.1%) and 3.4% (95% CI: 0.1%–6.8%), respectively. The excess death, defined as the difference between the observed and the predicted mortality rates, was most pronounced for the youngest (25–44 years) aged decedents, ranging from 23% to 34% for the youngest compared to 13%–18% for the oldest age groups. The trend of mortality suggests that age and sex disparities have persisted even through the recent Omicron surge, with excess AMI-associated mortality being most pronounced in younger-aged adults.
Source: Excess risk for acute myocardial infarction mortality during the COVID‐19 pandemic | Journal of Medical Virology 

Hiring people without degrees

This is my commentary on Bryan Alexander’s commentary on an Op-Ed in The New York Times. You’d think I’d be wholeheartedly in favour of fewer jobs requiring a degree and, broadly speaking, I am.

However, and I suppose I should write a more lengthy piece on this somewhere, I am a little concerned about jobs becoming credential-free and experience-free experiences. Anecdotally, I’ve found that far from CVs and resumes being on the decline, they’re being used more than ever — along with rounds and rounds of interviews that seem to favour, well… bullshitters.

At a broader level, I find the Times piece fitting into my peak higher education model in a quiet way. The editorial doesn’t explicitly call for fewer people to enroll in college, but does recommend that a chunk of the population pursue careers without post-secondary experience (or credentials). In other words, should public and private institutions heed the editorial, we shouldn’t expect an uptick in enrollment, but more of the opposite.

Which brings me to a final point. I’ve previously written about a huge change in how Americans think about higher ed. For a generation we thought that the more people get more college experience, the better. Since 2012 or so there have been signs of that national consensus breaking down. Now if the New York Times no longer shares that inherited model, is that shared view truly broken?

Source: Employers, hire more people without college degrees, says the New York Times | Bryan Alexander

Reasons for not writing

One of the reasons I continue with Thought Shrapnel is because it’s an easy way to ‘blog’ when I don’t feel like writing something from scratch.

I came up with seven reasons that I use to justify why I’m not writing. In a confusing twist of perspective, I’m also going to try and talk myself out of them by explaining to you, dear Reader, why they are bullshit.
The seven reasons?
  1. I don't have time
  2. I don't have anything interesting to say
  3. I gotta fix [X] on my site first
  4. Others have already written about this
  5. The moment for this has passed
  6. I can’t get it to sound right
  7. Nobody’s going to read it anyway
Source: 7 Reasons why I don't write | Max Böck

Should we "resist trying to make things better" when it comes to online misinformation?

This is a provocative interview with Alex Stamos, “the former head of security at Facebook who now heads up the Stanford Internet Observatory, which does deep dives into the ways people abuse the internet”. His argument is that social media companies (like Twitter) sometimes try too hard to make the world better, which he thinks should be “resisted”.

I’m not sure what to make of this. On the one hand, I think we absolutely do need to be worried about misinformation. On the other, he does have a very good point about people being complicit in their own radicalisation. It’s complicated.

I think what has happened is there was a massive overestimation of the capability of mis- and disinformation to change people’s minds — of its actual persuasive power. That doesn’t mean it’s not a problem, but we have to reframe how we look at it — as less of something that is done to us and more of a supply and demand problem. We live in a world where people can choose to seal themselves into an information environment that reinforces their preconceived notions, that reinforces the things they want to believe about themselves and about others. And in doing so, they can participate in their own radicalization. They can participate in fooling themselves, but that is not something that’s necessarily being done to them.

[…]

The fundamental problem is that there’s a fundamental disagreement inside people’s heads — that people are inconsistent on what responsibility they believe information intermediaries should have for making society better. People generally believe that if something is against their side, that the platforms have a huge responsibility. And if something is on their side, [the platforms] should have no responsibility. It’s extremely rare to find people who are consistent in this.

[…]

Any technological innovation, you’re going to have some kind of balancing act. The problem is, our political discussion of these things never takes those balances into effect. If you are super into privacy, then you have to also recognize that when you provide people private communication, that some subset of people will use that in ways that you disagree with, in ways that are illegal, and in some cases in ways that are extremely harmful. The reality is that we have to have these kinds of trade-offs.

Source: Are we too worried about misinformation? | Vox

Woke, broke, and complicated

The observation about young people’s desire for instant gratification is nothing particularly new. However, it is worth thinking about how the desire for more ‘green’ options is coupled with the desire to get everything instantly. The two are somewhat in tension.

Uncertainty about the future may be encouraging impulsive spending of limited resources in the present. The young were disrupted more by covid than other generations and are now enjoying the rebound. According to McKinsey, American millennials (born between 1980 and the late 1990s) spent 17% more in the year to March 2022 than they did in the year before. Despite this short-term recovery from the dark days of the pandemic, their long-term prospects are much less good.

[…]

Youngsters’ appetite for instant gratification is also fuelling some distinctly ungreen consumer habits. The young have virtually invented quick commerce, observes Isabelle Allen of kpmg. And that convenience is affordable because it fails to price in all its externalities. The environmental benefits of eating plants rather than meat can be quickly undone if meals are delivered in small batches by a courier on a petrol-powered motorbike. Shein, a Chinese clothes retailer that is the fastest in fast fashion, tops surveys as a Gen Z favourite in the West, despite being criticised for waste; its fashionable garments are cheap enough to throw on once and then throw away. Like everyone else the young are, then, contradictory—because, like everyone else, they are only human.

Source: How the young spend their money | The Economist

The art of Battle Royale-style video games

My kids like Fortnite and Warzone. The backstory to the genre, as told in this article, is really interesting, along with the realisation that it fuses storytelling and competition.

Video games broadly fall into two categories: those which, like sports, emphasize competition, and those which, like films, emphasize storytelling. Battle royale is a rare harmonious combination, a mode that encourages both dynamic, dramatic vignettes and high-stakes rivalry. At Infinity Ward, the Los Angeles-based co-developer of the Call of Duty series, which has long established the template for online competitive shooting games, PUBG was disruptive and divisive. “You could see it propagating through the office like wildfire,” Joe Cecot, the studio’s multiplayer-design director, said. “People were, like, ‘How do we make something like this? What would our twist on this be?’ ”

[…]

In the video-game medium, where players prize novelty—and, typically, not social commentary—the key to battle royale’s future may lie not in tweaking its rules but in deepening its story. In November, Activision released Warzone 2.0, which introduces some new mechanics. There’s now more than one safe circle, so players are herded into pockets of refuge, and it’s possible to interrogate downed opponents, making them reveal the position of their teammates. These embellishments add subtle points of difference, but it’s unlikely that they’ll energize the form. “Battle royale will now always be a part of the tool kit, in the same way that we’re never not going to have the fifty-two-card deck,” Lantz said. “But there’s not a lot of people making new games for the fifty-two-card deck. When a thirteen-year-old hears that there’s a new battle-royale game coming out today, it’s already a little bit boring. Like, you know, boomer stuff.”

Source: How “Battle Royale” Took Over Video Games | The New Yorker

Cambrian governance models

I think it’s fair to say that this article features ‘florid prose’, but the gist is that we should want society to be as complex as possible. This allows innovation to flourish and means we can solve some of the knottiest problems facing our world.

However, we’re hamstrung by issues around transnational governance, and particularly in the digital realm.

To summarise, we are traversing an epochal change and we lack the institutional capacity to complete this transformation without imploding. We could well fail, and the consequences of failure at this juncture would be catastrophic. However, we can collectively rise to the challenge and an exciting assemblage of subfields is emerging to help. We can fix the failed state that is the Internet if we approach building tech with institutional principles, and an Internet that delivers on its cooperative promise of deeper, denser institutional capacity is what we need as a planetary civilisation.

We don’t need a worldwide technical U.N. to figure this out. Rather, we need transnational topic-specific governance systems that interact with one another wherever they connect and overlap but that do not control one another, and that exercise subsidiarity to one another as well as to more local institutions. Yes, it will be a glorious mess — a Cambrian mess — but we will be collectively smarter for it.

Source: The Internet Transition | Robin Berjon

Tax and/or eat the rich

I’m essentially just bookmarking this in case I think that I’ve misremembered the astounding difference in global wealth between the top 1% and bottom 90% mentioned in this article.

The report said that for every $1 of new global wealth earned by a person in the bottom 90% in the past two years, each billionaire gained roughly $1.7m. Despite small falls in 2022, the combined fortune of billionaires had increased by $2.7bn a day. Pandemic gains came after a decade when both the number and wealth of billionaires had doubled.
Source: Call for new taxes on super-rich after 1% pocket two-thirds of all new wealth | The Guardian

Logging off from AI?

An interesting and persuasive article from Lars Doucet who considers the ways in which AI spam might mean that people retreat from ‘open sea’ social networks (including gaming / dating ones) to more niche areas.

I don’t think there’s anything particularly wrong with interacting with AIs in ways that include emotion. But it’s a solipsistic existence, and perhaps not one that leads to human flourishing.

What happens when anyone can spin up a thousand social media accounts at the click of a button, where each account picks a consistent persona and sticks to it – happily posting away about one of their hobbies like knitting or trout fishing or whatever, while simultaneously building up a credible and inobtrusive post history in another plausible side hobby that all these accounts happen to share – geopolitics, let's say – all until it's time for the sock puppet master to light the bat signal and manufacture some consensus?

What happens when every online open lobby multiplayer game is choked with cheaters who all play at superhuman levels in increasingly undetectable ways?

What happens when, from the perspective of the average guy, “every girl” on every dating app is a fiction driven by an AI who strings him along (including sending original and persona-consistent pictures) until it’s time to scam money out of him?

What happens when comments sections on every forum gets filled with implausibly large consensus-building hordes who are able to adapt in real time and carefully slip their brigading just below the moderator’s rules?

I mean, to various degrees all this stuff is already happening. But what happens when it cranks up by an order of magnitude, seemingly overnight?

What happens when most “people” you interact with on the internet are fake?

I think people start logging off.

Source: AI: Markets for Lemons, and the Great Logging Off | Fortress of Doors

Retro audio player

Adam Procter shared this with me recently, after witnessing the trials and tribulations of upgrading an iPod Classic. It’s pretty awesome, I have to say, but I don’t think I’m quite at the stage of custom PCBs yet…

Inspired by 1980s tape recorders, this audio player was designed with ease of use and accessibility in mind. Despite its nostalgic appearance, it packs modern hardware. Powered by the ESP32, it plays music and audiobooks from a micro SD card, either on its internal speaker or through a headphone jack. A 2.8" IPS screen and mechanical buttons make up the simplistic user interface. The software is built around the ESP32-audioI2S library by GitHub user Schreibfaul1.
Source: DIY Retro Audio Player | Hackaday.io
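For anyone curious what the software side of a build like this involves, here’s a minimal Arduino-style sketch for the ESP32 using the ESP32-audioI2S library the project mentions. The pin assignments, volume level, and file path are illustrative assumptions, not details from the project itself:

```cpp
#include <Arduino.h>
#include <SPI.h>
#include <SD.h>
#include "Audio.h"  // ESP32-audioI2S by Schreibfaul1

// Assumed wiring for illustration — check your own board's pinout
#define SD_CS    5   // SD card chip select
#define I2S_BCLK 27  // I2S bit clock
#define I2S_LRC  26  // I2S left/right clock
#define I2S_DOUT 25  // I2S data out to the DAC/amplifier

Audio audio;

void setup() {
  Serial.begin(115200);

  // Mount the micro SD card over SPI
  if (!SD.begin(SD_CS)) {
    Serial.println("SD card mount failed");
    return;
  }

  // Route audio to the I2S pins and set a moderate volume (0–21)
  audio.setPinout(I2S_BCLK, I2S_LRC, I2S_DOUT);
  audio.setVolume(12);

  // Start playing a file from the card (hypothetical path)
  audio.connecttoFS(SD, "/audiobook/chapter01.mp3");
}

void loop() {
  // The library decodes and streams audio incrementally;
  // loop() must be called continuously to keep playback going
  audio.loop();
}
```

Buttons for play/pause and track skipping would hang off this same loop, polling GPIO pins and calling the library’s playback controls; the screen is a separate concern entirely.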

Paying less attention to the attention economy

This is a reply from Jon Udell, a very smart guy I’ve interacted with a few times over the years. He wisely doesn’t link to the post he’s critiquing, primarily because (ironically) it would give more attention to someone he’s suggesting has a problem weaning themselves off the attention economy.

Udell talks about the ‘sweet spot’ on Twitter having been between 200 and 15,000 followers. The most I had was around 14,500, which seemed pretty awesome for a few years. I did notice that number not going up much after 2014.

But, as he says, the point about saying things online if you’re a regular person is hanging out and discussing things. There are absolutely times when you want to shout about things and make a difference, but that’s what boosting/retweeting is for, right?

If you occupy a privileged position in the attention economy, as Megan McArdle does now, and as I once did in a more limited way, then no, you won’t see Mastodon as a viable replacement for Twitter. If I were still a quasi-famous columnist I probably wouldn’t either. But I’m no longer employed in the attention economy. I just want to hang out online with people whose words and pictures and ideas intrigue and inspire and delight me, and who might feel similarly about my words and pictures and ideas. There are thousands of such people in the world, not millions. We want to congregate in different online spaces for different reasons. Now we can and I couldn’t be happier. When people say it can’t work, consider why, and who benefits from it not working.
Source: Of course the attention economy is threatened by the Fediverse | Jon Udell

Async work isn't just cancelling meetings

I thought this response by Becky Kane to Shopify publicly announcing that it’s cancelling 76,500 hours of meetings was not only a great example of identifying the issues underneath a problem, but also a masterclass in product marketing.

The Async Newsletter is from Twist, which positions itself as a collaborative messaging app for teams that doesn’t distract you. I’ve been meaning to try it for a while, and even more so now that I know how much they think about these things.

Kane’s point is that it’s easy to say ‘no meetings’ but this doesn’t provide another option for people. After all, if people are used to calling a meeting to share information, or because they’re not feeling ‘aligned’ as a team, or because they need to make a decision, how do they now do this?

As employees returned from their holiday break, the company’s leadership fired the first shot in an all-out war on meetings, and boy was it a doozy:

Effective immediately, all recurring meetings with more than 2 people would be automatically removed from company calendars – canceling 76,500 hours of meetings per year in one fell swoop.

This hard-line anti-meeting policy – designed to give people back time for focused work – also included:

  • A 2-week cooling off period before any meetings are put back on the schedule
  • Moving all large meetings of 50+ people to the same 6-hour window on Thursdays
  • Reupping a rule that no meetings at all can be held on Wednesdays
Given the masthead of this newsletter, you’d think I would applaud the move. But I’m skeptical it will have the kind of lasting impact on employee engagement and productivity that Shopify’s leadership team is aiming for.

You may have noticed that Shopify “reupped” No Meeting Wednesdays. A friend of mine who worked there told me that it was an open secret that everyone scheduled meetings on Wednesdays anyway because it was the only time that wasn’t already taken up with meetings.

Without bigger, deeper changes to company culture and operations, there’s no reason to think the same meeting creep won’t simply happen again. Or worse, that communication will be pushed into even more fragmented, distracting, and ultimately unproductive forms.

Sixteen hours on, eight hours off.

I do like posts about people’s routines and, in fact, I contributed to a website which became a book of them! This particular one is by Warren Ellis, who seems to live quite a solitary existence, at least when he’s writing.

Being alone can bring an intensity to one’s work, I’ve found, which may or may not be relevant or welcome, depending on what you do for a living. Given Ellis is a writer of graphic novels, novellas, and screenplays, it’s absolutely fitting, I guess.

I work until I get hungry. I’ll watch something – a tv episode, part of a film – while eating lunch, which is either cold meats and flatbreads or salmon with vegetables or something with eggs. I keep it simple and repeatable. Also I have constant access to eggs, as mentioned above. At some point in the afternoon I’ll have an apple with walnuts and cheese. Eight espressos a day, two litres of water. I mention the food because the one thing productivity notes tend to forget is that thinking burns calories, and the first things to kill thinking are thirst and having no calories available to burn.
Source: Morning Routine And Work Day, January 2022 | Warren Ellis

Getting serious

This is a great article by Katherine Boyle that talks about the lack of ‘seriousness’ in the USA, but also considers the wider geopolitical situation. We’re living at a time when world leaders are ever-older, and people between the ages of 18 and 29 just don’t have… that much to do with their time?

The Boomer ascendancy in America and industrialized nations has left us with a global gerontocracy and a languishing generation waiting in the wings. Not only does extended adolescence—what psychologist Erik Erikson first referred to as a “psychosocial moratorium” or the interim years between childhood and adulthood—affect the public life of younger generations, but their private lives as well.

[…]

In many ways, the emergence of extended adolescence was designed both to coddle the young and to conceal an obvious fact: that the usual leadership turnover across institutions is no longer happening. That the old are quite happy to continue delaying aging and the finality it brings, while the young dither away their prime years with convenient excuses and even better TikTok videos.

[…]

So in 2023, here we are: in a tri-polar geopolitical order led by septuagenarians and octogenarians. Xi Jinping, Joe Biden and Vladimir Putin have little in common, but all three are entering their 70s and 80s, orchestrating the final acts of their political careers and frankly, their lives. That we are beholden to the decisions of leaders whose worldviews were shaped by the wars, famines, and innovations of a bygone world, pre-Internet and before widespread mass education, is in part why our political culture feels so stale. That the gerontocracy is a global phenomenon and not just an American quirk should concern us: younger generations who are native to technological strength, modern science and emerging cultural ailments are still sidelined and pursuing status markers they should have achieved a decade ago.

Source: It’s Time to Get Serious | The Free Press

On the economic pressures of Covid

This is data from the USA, but I should imagine the picture is similar, on a smaller scale, in the UK. The difference, I guess, not being an economist, is that we still have a larger state over here and some vestiges of union action.

So how this plays out in terms of the pressure it puts on the workforce, and especially those employed directly or indirectly by the government, is different. It's why we're having lots of strikes right now.

It strikes me as extremely disingenuous of the UK government to be spinning the current crisis as being about them trying to avoid 'embedding 10% inflation' in the economy. It's not like we're going to see a reduction in prices if inflation levels decrease. People will still have had a real-terms pay cut.

As an historian by training, I can't help but think about the parallels with the Black Death and the collapse of feudalism due to the lack of workers...

Chart showing labour force shortfall in US

Federal Reserve chair Jerome Powell struck a particularly somber note at his press conference earlier this week when he mentioned that one reason the labor market is so tight right now is that many workers died from COVID-19.

The big picture: Economists have theorized for a while about the impact of COVID deaths on the labor market. Now, research has started to emerge and key public figures like Powell are starting to talk about it explicitly.

Source: Fed chair Powell on the U.S. labor shortage: COVID, retirements, missing immigrants | Axios

Facial recognition and the morality police

As this article points out, before 1979 removal of the traditional hijab was encouraged as part of Iran’s modernisation agenda. Once a theocracy came to power, however, the ‘morality police’ started using any means at their disposal to repress women.

Things have come to a head recently with high-profile women, for example athletes, removing the hijab. It would seem that the Iranian state is responding to this not with discussion, debate, or compassion, but rather with more repression, this time in the form of facial recognition and increasing levels of surveillance.

We should be extremely concerned about this, as once there is no semblance of anonymity anywhere, then repression by bad actors (of which governments are some of the worst) will increase exponentially.

After Iranian lawmakers suggested last year that face recognition should be used to police hijab law, the head of an Iranian government agency that enforces morality law said in a September interview that the technology would be used “to identify inappropriate and unusual movements,” including “failure to observe hijab laws.” Individuals could be identified by checking faces against a national identity database to levy fines and make arrests, he said.

Two weeks later, a 22-year-old Kurdish woman named Jina Mahsa Amini died after being taken into custody by Iran’s morality police for not wearing a hijab tightly enough. Her death sparked historic protests against women’s dress rules, resulting in an estimated 19,000 arrests and more than 500 deaths. Shajarizadeh and others monitoring the ongoing outcry have noticed that some people involved in the protests are confronted by police days after an alleged incident—including women cited for not wearing a hijab. “Many people haven’t been arrested in the streets,” she says. “They were arrested at their homes one or two days later.”

Although there are other ways women could have been identified, Shajarizadeh and others fear that the pattern indicates face recognition is already in use—perhaps the first known instance of a government using face recognition to impose dress law on women based on religious belief.

Source: Iran to use facial recognition to identify women without hijabs | Ars Technica

U.S. Army Corps releases cat calendar

Well, this is fun! More whimsy at work, please.

Gigantic cats using hydropower dams as scratching posts are just some of the pawed pinups in a 2023 calendar released by Pacific Northwest-based U.S. military personnel.

The photoshopped felines are part of an effort by the Portland, Ore., branch of the U.S. Army Corps of Engineers to portray their work in an entertaining light.

Engineering isn’t always exciting, so the district tries to have a fun social media presence, public affairs specialist Chris Gaylord told NBC’s Today.com on Monday.

“I will use levity whenever I can; that’s what people enjoy,” Gaylord said. “That’s not us dumbing things down. That’s us respecting and not taking for granted the attention of our publics.”

Source: Yes, a branch of the Army Corps of Engineers did make a cat calendar | Stars and Stripes

Getting your book published in 2023

This, via Warren Ellis, is a useful resource. I also like that its creator, Jane Friedman, has made it available to be downloaded, printed, and shared “no permission required” (although I wish she’d explicitly used a CC0 license).

When I shared this with a friend, they pointed out that it doesn’t include the ‘kickstarter’ kind of model. While Friedman points out that the chart is primarily for ‘trade press’ (i.e. books with a general audience) there’s a whole different type of approach, which I kind of pioneered a decade ago with OpenBeta and which is more easily achieved these days with platforms such as Leanpub.

(click on image to download PDF)

One of the biggest questions I hear from authors today: Should I traditionally publish or self-publish?

This is an increasingly complicated question to answer because:

  1. There are now many varieties of traditional publishing and self-publishing, with evolving models and diverse contracts.
  2. It’s not an either/or proposition; you can do both. Many successful authors, including myself, decide which path is best based on our goals and career level.
Thus, there is no one path or service that’s right for everyone all the time; you should take time to understand the landscape and make a decision based on long-term career goals, as well as the unique qualities of your work. Your choice should also be guided by your own personality (are you an entrepreneurial sort?) and experience as an author (do you have the slightest idea what you’re doing?).
Source: The Key Book Publishing Paths: 2023–2024

Good writing is good writing

I’ve seen all of the Star Wars films at least once. I’m not big into sci-fi or fantasy, but on the recommendation of seemingly everyone (including my son) I’ve started watching Andor on Disney+.

I’m not even half-way through but it really is excellent, with no ridiculous CGI, just a believable world and an excellent storyline.

Andor largely eschews many Star Wars staples, such as wacky creatures and funny droids, focusing instead on the realities of power and violence. Fantasy author Erin Lindsey, who worked for many years as a UN aid worker, found the show’s depiction of politics to be completely believable. “I think there are clearly people on the writing team who are students of spy novels like [those by] John le Carré and who are students of politics and students of history, who are really looking at how revolution has happened here on Earth and what that looks like,” she says.

Despite its high quality, Andor‘s ratings have lagged behind Star Wars shows like Obi-Wan Kenobi and The Mandalorian. Geek’s Guide to the Galaxy host David Barr Kirtley hopes that Andor will attract a larger audience in season 2. “It’s so good,” he says. “It deserves higher ratings than it’s gotten so far. And I definitely want to see more shows like this. This is the kind of show—especially the kind of Star Wars show—that I’ve been pining after for all these years. So please let’s all just give it as much support as we can.”

Source: ‘Andor’ Is a Master Class in Good Writing | WIRED

Update your profile photo at least every three years

I think this is good advice. I try to update mine regularly, although I did realise that last year I chose a photo that was five years old! I prefer ‘natural’ photos that are taken in family situations which I then edit, rather than headshots these days.

Unfortunately, some people will make assumptions about you based on a photo, and those impressions paint a picture of how we perceive someone. “The first thing people encounter is the delta” between how you represent yourself in a photo and how you look in person, Marion Dino, a retired human resources executive and career coach explains. “You want to convey that you are trustworthy. Most people aren’t intentionally judging, but we all have unconscious biases, and you leave yourself open to the interpretation of being less than honest if you don’t represent yourself accurately.” Most of the time, a résumé doesn’t include a profile photo, but “recruiters do look at LinkedIn and other social media platforms,” Dino says. “You don’t want to leave the impression that you aren’t authentic.”

[…]

And how often should you update your photos, so they don’t give someone pause when you meet in person? “Profile pictures should be updated every three years unless there is a significant change in appearance. Then, they should be taken sooner.” So, the next time you scroll LinkedIn, log in to a Zoom meeting, or even send an email with a thumbnail profile photo, think about how you want to be perceived, and don’t hesitate to use a picture that fully represents who you are.

Source: How Often Should You Update Your Profile Photos? | WIRED

Let's make private schools help pay for state schools

I’m delighted to hear about this and I hope the vote passes. It’s a farce that places of privilege should gain tax breaks and have ‘charitable status’. As I’ve said many times before, opting out of state education and the NHS should be either impossible or ridiculously expensive.

Labour will attempt to force a binding vote on ending private schools’ tax breaks and use the £1.7bn a year raised from this to drive new teacher recruitment.

The motion submitted by Keir Starmer’s party for the opposition day debate on Wednesday is drafted to push the charitable status scheme that many private schools enjoy to be investigated, as the party attempts to shift the political focus on to education.

[…]

Labour will hope the motion will force the government to make its MPs vote down an issue, rather than ignoring the process. A Labour source has previously said: “Conservative MPs voting against our motion are voting against higher standards in state schools for the majority of children in our country.”

Source: Labour look to force vote on ending private schools’ tax breaks | The Guardian

Chameleon e-ink car

Most of the things at the annual CES tech show in Las Vegas are either pointless (at least to me) or in some way enabling of ever-greater surveillance.

However, this e-ink car really caught my imagination. I’m a big fan of both e-ink (it’s easy on the eyes) and customisation, so this is really in my sweet spot. I did wonder for half a second about a whole movie plot using e-ink for a getaway car, and then I realised that every car these days has a GPS chip and SIM card in it…

Introduced by Arnold Schwarzenegger, BMW’s i Vision Dee caught the attention with its E-ink outer skin, which can change colour in an instant. Don’t expect that on a car you can buy any time soon but it also has a head-up display projected across the full width of the windscreen, which will be available from 2025.
Source: Chameleon cars, urine scanners and other standouts from CES 2023 | The Guardian

Level 3 busy-ness

Discovered via Kottke, this ‘seven levels of busy’ makes me realise that I don’t really want to be beyond Level 3 most weeks. Level 4 is OK on occasion.

It’s over a decade since I’ve experienced Levels 5 and 6, I reckon. And I’ve never let things get to Level 7.

Level 3: SIGNIFICANT COMMITMENTS I have enough commitments that I need to keep track of them in a tool because I can no longer organically triage. My calendar is a thing I check infrequently, but I do check it to remind myself of the flavor of this particular day.
Source: The Seven Levels of Busy | Rands in Repose

Photo: Dan Freeman

Nick Cave's plans for 2023

The artist Nick Cave has a (newsletter? blog?) called The Red Hand Files in which he answers questions from his fans. Somebody pointed me towards a recent post where he talks about his aim to write and record a new album in 2023.

I love the way he talks about the creative process, and how mysterious it is.

My plan for this year is to make a new record with the Bad Seeds. This is both good news and bad news. Good news because who doesn’t want a new Bad Seeds record? Bad news because I’ve got to write the bloody thing.

[…]

Writing lyrics is the pits. It’s like jumping for frogs, Fred. It’s the shits. It’s the bogs. It actually hurts. It comes in spurts, but few and far between. There is something obscene about the whole affair. Like crimes that rhyme. I hope this doesn’t last long. I’m actually scared. But it always does. Last long. To write a song. You hope to God there is something left. You are bereft. I’m going to stop this letter. It isn’t making things better. It’s like flogging a dead horse. Worse. It’s a hearse. A hearse of dead verse. Dead, Fred. Dead.

Source: Nick Cave - The Red Hand Files - Issue #217

Walking around like Lionel Messi

I didn’t get a chance to read this excellent article in The New Yorker about Lionel Messi until today. It was published the week leading up to the World Cup Final, which of course Argentina won, making Messi possibly the greatest player of all time (behind Pele? RIP.)

What I like about it is that it shows that ‘work’ doesn’t always look like running around the place looking ‘busy’. In fact, the greatest people at a given thing are usually involved in the background while people are concentrating solely on the foreground.

Messi is soccer’s great ambler. To keep your eyes fixed on him throughout a match is both spellbinding and deadly dull. It is also a lesson in the art and science of watching a soccer match. If you ask any astute observer—an experienced coach or player or tactically tuned-in analyst—how to understand the game, they will advise you to take your eyes off the ball. There may well be an analogous precept, with a German name, in philosophy or art history or mechanical physics. The idea is this: to apprehend the main thrust of the narrative, to really wrap your mind around what’s going on, you must shift your focus from the foreground to the background.

[…]

[I]f you happen to be watching a match featuring Leo Messi, you’ll notice that something on the order of eighty-five per cent of the time, he can be found off the ball, strolling and dawdling and looking mildly uninterested. It is the kind of behavior associated with selfish players, prima donnas who expend no effort on defense and bestir themselves only when goal-scoring opportunities arise. Messi, of course, is one of the most prolific scorers of all time, with a career total of nearly eight hundred goals in club and international competition. His penchant for walking is not a symptom of indolence or entitlement; it’s a practice that reveals supreme footballing intelligence and a commitment to the efficient expenditure of energy. Also, it’s a ruse—the greatest con job in the history of the game.

A famous aphorism, usually attributed to the Spanish manager Vicente del Bosque, sums up the subtly visionary play of the midfielder Sergio Busquets this way: when you watch the game, you don’t see Busquets—but when you watch Busquets, you see the whole game. Something related might be said about the great Argentinean: when you watch Messi, you watch him watching the game. Another manager, Manchester City’s Pep Guardiola, who coached Messi for four years at Barcelona, has described his walking, especially in the early stages of a game, as form of cartography—an exercise in scanning and surveying, taking the measure of the defense, noticing where the vulnerabilities lie, and calculating when and how opportunities might be seized. “After five, ten minutes, he’ll have a map in his eyes and in his brain,” Guardiola has said. “[He’ll] know exactly what is the space and what is the panorama.”

Source: The Genius of Lionel Messi Just Walking Around | The New Yorker

Spreading joy in 2023

I love the idea behind this list of 52 acts of kindness. Realistically, numbers 14, 16, and 38 are the ones I’m likely to do (because I already do them!)

14. Pay a compliment “You’re looking nice,” is good. “You have great skin” or “I love your shoes” is better. Someone once told me I had “cute ears” and I treasured it for years.

16. Make a mixtape Give someone a curated Spotify or YouTube playlist of stuff you think they would like.

38. Drive kindly If you’re sure it’s safe, flash your lights or wave your hand at someone waiting to cross the road in front of you.

Source: 52 acts of kindness: how to spread joy in every week of 2023 | The Guardian

Preparation is everything

I used to have a quotation on the wall of my classroom when I was a teacher that has been attributed to various people, but reads: “Opportunity is missed by most people because it is dressed in overalls and looks like work.”

The point of the quotation is that to have any kind of success in life that isn’t luck-dependent, you have to be ready. That looks different depending on the situation, but (for me at least) involves thinking about different scenarios, what could play out, etc.

This post, found via HN, is from a developer thinking about software projects. But the point he makes is universal: preparing effectively means that you can get on and focus on delivering without having to keep stopping.

Motivation is the willingness to want to do something. This is of course an important first step in potentially being productive. We are better at things we want to do, rather than things we’re forced to do by others, or by our own self discipline.

But motivation is nothing more than that. It helps us start, but it doesn’t mean we’ll finish, or even produce half of what we want to. Even when we are motivated, if we don’t make enough progress our motivation has a way of epically [sic] disappearing.

[…]

Knowing how to make progress and making progress are two different things, but we often conflate them and treat them as the same thing. We basically jump into the task and start.

[…]

Productivity doesn’t come from feeling motivated, it comes from knowing what you need to do in enough detail that you can complete it without continually stopping and losing your focus.

Source: To Be Productive, Be Prepared | Martin Rue

Image: Brett Jordan

This is 2023

We're back! Happy New Year!

Over the break, this site moved to a managed hosting platform, which should mean less downtime 🎉

That was 2022

Ice on a window

Inspired by Warren Ellis closing his LTD site until 2023, this is a notification that Thought Shrapnel is done for the year!

I may send out a 'best of the year' newsletter (sign up here!) if I don't end up in a mince pie coma, but either way thanks for your attention and appreciation. See you in January!


Image by Kelly Sikkema

'Nightfall' meteorite contains new and unusual minerals

OK, so it’s not Vibranium, but discovering potentially three new minerals in a meteorite found in Somalia is pretty exciting! I wonder what new substances we’ll find as we further explore space, and what uses we’ll discover for them?

The meteorite, the ninth largest recorded at over 2 metres wide, was unearthed in Somalia in 2020, although local camel herders say it was well known to them for generations and named Nightfall in their songs and poems.

[…]

Dr Chris Herd, a professor in the department of earth and atmospheric sciences and the curator of the collection, said that while he was classifying the rock he noticed “unusual” minerals. Herd asked Andrew Locock, the head of the university’s electron microprobe laboratory, to investigate.

[…]

Similar minerals had been synthetically created in a lab in the 1980s but never recorded as appearing in nature, Herd said, adding that these new minerals could help understand how “nature’s laboratory” works and may have as yet unknown real-world uses. A third potentially new mineral is being analysed.

Source: Researchers discover two new minerals on meteorite grounded in Somalia | The Guardian

No benefits to post-Brexit deregulation

Coupled with the pandemic and the energy crisis, Brexit is absolutely destroying the UK at the moment. If you haven’t watched The Brexit Effect made by the Financial Times, then you really, really should.

This article in the New Statesman argues that the deregulation touted as a huge benefit of Brexit isn’t wanted or needed by most UK businesses. It’s the red tape added by being outside the EU single market that’s the problem.

Most businesses have no interest or understanding of the government’s plans for post-Brexit deregulation. And a majority of companies could not name a single EU law that they would change or remove to become more profitable, according to findings shared exclusively with the New Statesman by the British Chambers of Commerce.

[…]

In a new survey of 938 businesses, made up largely of SMEs (and therefore representative of the UK economy), just 14 per cent specified an EU regulation they would remove; 58 per cent of firms had no preference over the amendment or removal of any EU regulation. Half said that deregulation is either a low priority or not a priority at all.

Source: Exclusive: Most UK businesses see no benefit in post-Brexit deregulation | New Statesman

Study shows no link between age at getting first smartphone and mental health issues

Where we live is unusual for the UK: we have first, middle, and high schools. The knock-on effect of this in the 21st century is that kids aged nine years old are walking to school and, often, taking a smartphone with them.

This study shows that the average age children were given a phone by parents was 11.6 years old, which meshes with the ‘norm’ (I would argue) in the UK of giving kids one when they go to secondary school.

What I like about these findings is that parents overall seem to do a pretty good job. It’s been a constant battle with our eldest, who is almost 16, to be honest, but I think he’s developed some useful habits around technology.

Parents fretting over when to get their children a cell phone can take heart: A rigorous new study from Stanford Medicine did not find a meaningful association between the age at which kids received their first phones and their well-being, as measured by grades, sleep habits and depression symptoms.

[…]

The research team followed a group of low-income Latino children in Northern California as part of a larger project aimed to prevent childhood obesity. Little prior research has focused on technology acquisition in non-white or low-income populations, the researchers said.

The average age at which children received their first phones was 11.6 years old, with phone acquisition climbing steeply between 10.7 and 12.5 years of age, a period during which half of the children acquired their first phones. According to the researchers, the results may suggest that each family timed the decision to what they thought was best for their child.

“One possible explanation for these results is that parents are doing a good job matching their decisions to give their kids phones to their child’s and family’s needs,” Robinson said. “These results should be seen as empowering parents to do what they think is right for their family.”

Source: Age that kids acquire mobile phones not linked to well-being, says Stanford Medicine study | Stanford Medicine

Four forces that constrain our actions

‘Pathetic Dot’ is not a great name for a theory, and the diagram on the Wikipedia page isn’t the best, but Christina Bowen reminded me of it during an introductory conversation yesterday.

I can’t find it again quickly, but this also reminds me of a discussion I saw about how credit scores can exert almost as much unseen social control over people in the West as very visible social control mechanisms in more authoritarian countries.

The pathetic dot theory or the New Chicago School theory was introduced by Lawrence Lessig in a 1998 article and popularized in his 1999 book, Code and Other Laws of Cyberspace. It is a socioeconomic theory of regulation. It discusses how lives of individuals (the pathetic dots in question) are regulated by four forces: the law, social norms, the market, and architecture (technical infrastructure).

Lessig identifies four forces that constrain our actions: the law, social norms, the market, and architecture. The law threatens sanction if it is not obeyed. Social norms are enforced by the community. Markets through supply and demand set a price on various items or behaviors. The final force is the (social) architecture. By that Lessig means “features of the world, whether made, or found”; noting that facts like biology, geography, technology and others constrain our actions. Together, those four forces are the totality of what constrains our action, in fashion both direct and indirect, ex post and ex ante.

[…]

The theory can be applied to many aspects of life (such as how smoking is regulated), but it has been popularized by Lessig’s subsequent usage of it in the context of the regulation of the Internet.

Source: Pathetic dot theory | Wikipedia

French views of Brexit

It’s always interesting reading articles from foreign newspapers about the state of the UK. I wish it were true that conversations about Brexit and the damage it’s done were on the table. But I just don’t see it.

Brexit is once again at the heart of the British debate. Experts and the media are openly criticizing its negative effects on the UK economy. On the BBC's flagship politics show Question Time and on the popular LBC talk radio station, the audience is increasingly critical of the UK's divorce from the European Union. According to a poll by the YouGov institute published on November 17, 56% of respondents believe that the country "was wrong to leave the EU" on December 31, 2020.

[…]

The presentation of an austerity budget by new Prime Minister Rishi Sunak's government on November 17 in an attempt to restore the country's financial credibility (after the disastrous episode of the Liz Truss "mini-budget") has loosened tongues. On this occasion, the Office for Budget Responsibility estimated that British living standards would plummet by 7% over the next two years. This independent government body said that Brexit "has had a significant negative impact" on British foreign trade, with the decline amounting to 15% over the long term.

Source: Amid an economic and social crisis, anti-Brexit sentiment is growing in the UK

Who wants to live forever?

I definitely feel the middle-aged white guy urge to focus on health, nutrition, etc. But I just felt really sorry when I watched the start of a video where Bryan Johnson, who sold his company to PayPal, goes through his routine. He just looks… lonely?

Photo below is of Jack Dorsey, former Twitter CEO, who also follows an ascetic lifestyle.

Who wants to live for ever? Not me, with all due respect to Freddie Mercury for asking, and possibly not you either. Only a third of Britons even want to make it to 100, according to a recent Ipsos poll carried out for the British not-for-profit initiative the Longevity Forum. This suggests less a death wish than a fear of what growing old may actually involve. Tellingly, the older the respondent already was, the less enthusiastic they were about getting very much older. Extreme age can look brutal, up close.

Personally, I want very much to live until my child no longer needs me, whenever that may be, and to enjoy some kind of retirement. But beyond that, I just want to live until it feels like enough, and then ideally to have some control over the end. I’d rather have a busy, happy, meaningful life and drop dead at 75 than make it to 150 and run out of ways to fill the endless days.

Source: Who wants to live to 100 on a diet of lentil and broccoli slurry? Mostly rich men | The Guardian

Japanese miniature dioramas

I love these so much.

Miniature Calendar is an incredible ongoing project by Japanese artist Tatsuya Tanaka, that features beautiful miniature dioramas of everyday life using common household objects such as food, cloth, stationery, electronic devices, and even masks.

Source: Japanese Artist Creates Amazing Miniature Dioramas Every Day For 10 Years | Digital Synopsis

(Partially) visualising the Fediverse

About a decade ago, it was possible to visualise your LinkedIn network. I really liked it, especially as I had three distinct groups of connections (EdTech, schools, and Higher Ed).

This website allows you to visualise around 4.5k Fediverse instances, as of last week. You can change the colour and size of the dots depending on number of users, posts, theme, etc.

Exercise.cafe isn’t on there, nor is wao.wtf. But it’s still a useful tool.

Screenshot of Mapstodon

Source: Mapstodon

Collectively-owned Fediverse instances

I’m essentially bookmarking this publicly as it’s a useful reference for Fediverse instances (all currently running Mastodon!) which are collectively owned.

What I’m interested in is diversifying and going beyond this very useful list. First, I’d love examples to be added which are running other Fediverse software than Mastodon. For example, I’ve got a test instance of Misskey running at wao.wtf.

Second, I’m interested in the governance of these instances. If you’re not involved with co-operatives or other organisations that are democratically operated, it can seem like a bit of a black box. So I think we need a collaboratively-created guide to collective decision-making processes when it comes to Fediverse instances.

Fediverse instances with an explicit system of shared governance, usually made legally binding through an incorporated association or cooperative.

This page will also list instances which are closed for registrations and dead instances, so that we can collectively learn from their experience.

Originally created by @nemobis@mamot.fr inspired by a @Matt_Noyes@social.coop thread.

Source: Collectively owned instances - fediparty | Codeberg.org

Prestige and associational value

This is 100% true and one of the reasons that I think that Open Badges and Verifiable Credentials are so awesome. Associational value is built-in for human beings, as we’re social creatures who set store by what other people value.

For example, I’m Dr. Belshaw which has a certain cachet and status in some circles. But people are usually much more interested/impressed by the fact that I worked for Mozilla and that one of our co-op’s clients is Greenpeace.

Them’s the breaks. And I feel like passing on this kind of wisdom to the younger generations is really important, to be honest, as a way that the world actually works.

A lot of people suspect that having been part of a prestigious organisation (such as a famous university or an "elite" org in your field) gets you an unfair advantage when applying for future jobs.

There are two main avenues you could imagine for this advantage. One is basically nepotism: through the organisation you meet lots of other people who will later give you preferential access to jobs.

A second avenue is through the associational value of the institution: that people with no specific connection to you or that organisation will see the name of Prestigious Institution on your resume and hire you because, well, you were at Prestigious Institution.

[…]

I think associational value often comes out of single-sentence descriptions of what somebody has done, and that therefore there are often relatively-easy ways to get 99% of the associational benefits of a prestigious institution at a much lower cost.

For example, in the magazine-writing world, people are often (approximately) defined by 1-3 of the most famous publications they’ve ever written for: “X’s work has appeared in TK, TK and TK,” or “this is my friend Y, she writes for [famous media brand].”

[…]

I’m not entirely sure how to work around this one, beyond the “try to get a mild affiliation with a prestigious institution, even if it’s an incredibly silly one” hack.

Source: Associational Value | Atoms vs Bits

Richard Hammond's near-death experience

Richard Hammond, co-presenter of the original Top Gear and The Grand Tour, reflects on his near-death experience. Worth a watch.

Source: Richard Hammond explains what he experienced during his coma | 310mph Crash | YouTube

Some tips for adding winter cheer

There are some excellent suggestions in this list of 53 things that can give you a lift over the winter months. I’ve highlighted three of my favourites below!

Bowl of fruit with stick-on eyes.

7. Walk with an audio book
On a crisp winter’s day, there is no finer companion than 82-year-old actor Seán Barrett. His sublime narration of Mick Herron’s Slough House series, about a bunch of MI5 outcasts, will bring cheer to the gloomiest days.

[…]

18. Buy foods you can’t identify
Purchase food in shops where the majority of products have no English on the packaging, so eating what you buy is an adventure. It might be black limes, a box of tamarinds or a rosewater drink with vermicelli pieces. It’s like travelling without travelling.

[…]

50. Sweat in a sauna
We’ve all been told about the wellbeing boost of plunging into cold water, wild swimming and turning your shower down to freezing. But who wants to be cold? Book yourself into a sauna. Let the heat and steam soak deep into your bones and sweat out all your worries.

Source: Need a lift? Here’s 53 easy ways to add cheer to your life as winter looms | The Guardian

Convivial social networking

Adam Greenfield composed a thread this morning on Mastodon in which he referenced Ivan Illich’s call for conviviality. This was also referenced in a post by Audrey Watters which was shared a few minutes later in my timeline by Aaron Davis.

Such synchronicity is, of course, entirely random but meshed well with my state of mind this morning. I find it interesting that Audrey thinks it’s ridiculous to think that Mastodon is “what’s next” and instead looks to email. For what it’s worth, I see the Fediverse as being a lot like email, actually.

Given that she’s got a brain and experience several times the size of mine, I’d love it if she wrote more about this…

It's easy to look at the world right now and focus on the shit... The Republican takeover of the House. The economy. The way my body feels after running 6.85 miles on Sunday morning and then sitting in the car for 2+ hours on the drive home. The implosion of Twitter. The ridiculousness of suggesting Mastodon is "what's next." And so on. I mean, I have lots of thoughts on all of these, particularly the Twitter and Mastodon brouhaha. I read an email newsletter that referenced a Twitter thread in which Alexis Madrigal argued that Twitter, at least in its original manifestation, was for "word people." I quite like that framework, and it's helpful in showcasing how Facebook and now TikTok really would rather the ascendant influencers be picture people. TV people, even. It's time to pull out 'Tools for Conviviality', perhaps, for a re-read, because I'm loathe to make the argument that email is, in fact, where we find technological conviviality these days. But that's the direction I'm considering taking the argument. If I were to write about it and think about it more, that is.

Source: The Week in Review: What's Good | Audrey Watters

Mourning what we've lost

I found this an eloquent explanation of emotions and feelings I've experienced over the last couple of weeks as the Fediverse has been 'invaded' by people considering themselves 'refugees' from Twitter.

As Hugh Rundle points out in this post, some of us have already mourned what we'd lost with Twitter and had made our home in a comfy, homely new place. There were rules, both implicit and explicit, about how to behave, but now...

For those of us who have been using Mastodon for a while (I started my own Mastodon server 4 years ago), this week has been overwhelming. I've been thinking of metaphors to try to understand why I've found it so upsetting. This is supposed to be what we wanted, right? Yet it feels like something else. Like when you're sitting in a quiet carriage softly chatting with a couple of friends and then an entire platform of football fans get on at Jolimont Station after their team lost. They don't usually catch trains and don't know the protocol. They assume everyone on the train was at the game or at least follows football. They crowd the doors and complain about the seat configuration.

It's not entirely the Twitter people's fault. They've been taught to behave in certain ways. To chase likes and retweets/boosts. To promote themselves. To perform. All of that sort of thing is anathema to most of the people who were on Mastodon a week ago. It was part of the reason many moved to Mastodon in the first place. This means there's been a jarring culture clash all week as a huge murmuration of tweeters descended onto Mastodon in ever increasing waves each day. To the Twitter people it feels like a confusing new world, whilst they mourn their old life on Twitter. They call themselves "refugees", but to the Mastodon locals it feels like a busload of Kontiki tourists just arrived, blundering around yelling at each other and complaining that they don't know how to order room service. We also mourn the world we're losing.

[...]

I was a reasonably early user of Twitter, just as I was a reasonably early user of Mastodon. I've met some of my firmest friends through Twitter, and it helped to shape my career opportunities. So I understand and empathise with those who have been mourning the experience they've had on Twitter — a life they know is now over. But Twitter has slowly been rotting for years — I went through that grieving process myself a couple of years ago and frankly don't really understand what's so different now compared to two weeks ago.

There's another, smaller group of people mourning a social media experience that was destroyed this week — the people who were active on Mastodon and the broader fediverse prior to November 2022. The nightclub has a new brash owner, and the dancefloor has emptied. People are pouring in to the quiet houseparty around the corner, cocktails still in hand, demanding that the music be turned up, walking mud into the carpet, and yelling over the top of the quiet conversation.

All of us lost something this week. It's ok to mourn it.

Source: Home invasion | Hugh Rundle

Image: Joshua Sukoff

Second-order effects of widespread AI

Sometimes ‘Ask HN’ threads on Hacker News are inane or full of people just wanting to show off their technical knowledge. Occasionally, though, there’s a thread that’s just fascinating, such as this one about what might happen once artificial intelligence is widespread.

Other than the usual ones about deepfakes, porn, and advertising (all of which should concern us) I thought this comment by user ‘htlion’ was insightful:

AI will become the first publisher of contents on any platform that exists. Will it be texts, images, videos or any other interactions. No banning mechanisms will really help because any user will be able to copy-paste generated content. On top of that, the content will be generated specifically for you based on "what you like". I expect a backlash effect where people will feel like becoming cattle which is fed AI-generated content to which you can't relate. It will be even worse in the professional life where any admin related interaction will be handled by an AI, unless you are a VIP member for this particular situation. This will strengthen the split between non-VIP and VIP customers. As a consequence, I expect people to come back to locality, be it associations, sports clubs or neighborhood related, because that will be the only place where they will be able to experience humanity.

Source: What will be the second order effects of widespread AI? | Hacker News

Hyperbolic discounting applied to habit-formation

We live near the middle of town, a five-minute walk to the leisure centre — and less than that to get to the shops. As a result, we don’t use our cars at all for three days of the week, and I go to the gym at the leisure centre every day.

My grandmother, who wasn’t well-off and who rented all her life, used to ensure that she lived right next to a bus stop. Although she wouldn’t have used the phrase from this article, she knew that she was more likely to travel and get places that way!

You may have heard of hyperbolic discounting from behavioral economics: people will generally disproportionally, i.e. hyperbolically, discount the value of something the farther off it is. The average person judges $15 now as equivalent to $30 in 3-months (an annual rate of return of 277%!).

[…]

But what about when something is farther off in space rather than time?

Say a 1-hour activity is 10 minutes away, compared to 5 minutes away. The total time usage would be 80 vs 70 minutes, about 15% longer. A linear model would predict that it would feel 15% more costly, and then proportionally affect your likelihood of going. In practice though, an extra 10 or 20 minutes of travel time will somehow frequently nudge you into non-participation.

Source: Hyperbolic Distance Discounting | Atoms vs Bits
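
Out of curiosity, both figures in the quote check out arithmetically. The 277% appears to be a continuously compounded annual rate, and the travel-time claim is simple addition. A quick sketch (my own, not from the article):

```python
import math

# $15 now judged equivalent to $30 in 3 months (0.25 years):
# the implied continuously compounded annual rate is ln(30/15) / 0.25.
annual_rate = math.log(30 / 15) / 0.25
print(f"{annual_rate:.0%}")  # roughly 277%

# A 1-hour activity 10 minutes away vs 5 minutes away,
# travelling both ways: 60 + 2*10 = 80 minutes vs 60 + 2*5 = 70.
far, near = 60 + 2 * 10, 60 + 2 * 5
print(f"{far / near - 1:.0%} longer")  # roughly 14%, i.e. "about 15%"
```

Which is the article's point: the extra travel is a modest linear cost on paper, yet in practice it disproportionately tips people into not going at all.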

The (surprising) oldest full sentence in the Canaanite language in Israel

Apparently this comb has an inscription on it which reads “May this tusk root out the lice of the hair and the beard.” It was made from an imported elephant tusk!

The comb measures just 3.5 by 2.5 centimeters (roughly 1.38 by 1 inches), with teeth on both sides, although only the bases remain; the rest of the teeth were likely broken long ago. One side had thicker teeth, the better to untangle knots, while the other had 14 finer teeth, likely used to remove lice and their eggs from beards and hair. Further analysis showed noticeable erosion at the comb's center, which the authors believe was likely due to someone's fingers holding it there during use.

The authors also used X-ray fluorescence spectroscopy, Fourier-transform infrared spectroscopy, and digital microscopy to confirm that the comb is made of ivory from an elephant tusk, suggesting it was imported. The team sent a sample from the comb to the University of Oxford’s radiometric laboratory, but the carbon was too poorly preserved to accurately date the sample.

The inscription consists of 17 letters (two damaged) that together form a complete seven-word sentence. The letters aren’t well-aligned, per the authors, nor are they uniform in size; the letters become progressively smaller and lower in the first row, with letters running from right to left. When whoever engraved the comb reached the edge, they turned it 180 degrees and engraved the second row from left to right. The engraver actually ran out of room on the second row, so the final letter is engraved just below the last letter in that row. Still, said engraver had to be fairly skilled, given the small size of the lettering.

Source: Ancient wisdom: Oldest full sentence in first alphabet is about head lice | Ars Technica

Rituals for moving jobs when working from home

Terence Eden reflects on changing jobs when working from home and how… weird it can be. While I’ve been based from two different converted garages during the past decade, I’ve travelled a lot so it has felt different.

I can imagine, though, if that’s not the case, it can all feel a little bit discombobulating!

One Friday last year, I posted some farewell messages in Slack. Removed myself from a bunch of Trello cards. Had a quick video call with the team. And then logged out of my laptop. I walked out of my home office and sat in my garden with a beer.

The following Monday I opened the door to the same office. I logged in to the same laptop. I logged into a new Slack - which wasn’t remarkably different from the old one. Signed in to a new Trello workspace - ditto. And started a video call with my new team.

I’ll admit, it didn’t feel like a new job!

There was no confusing commute to a new office. No having to work out where the toilets and fire exits were. No “here’s your desk - it’s where John used to sit, so people might call you John for a bit”. I didn’t even have to remember people’s names because Zoom showed all my colleagues' names & job titles.

There was no waiting in a liminal space while receptionists worked out how to let me in the building.

In short, there was no meaningful transition for me.

Source: Job leaving rituals in the WFH era | Terence Eden’s Blog

Decentralisation begins at decentring yourself

Aral Balkan, who has 22,000 followers on the Fediverse and who recently had a birthday, has written about the influx of people from Twitter. As I’ve found, especially on my personal blog, you can essentially run a Distributed Denial of Service (DDoS) attack on yourself just by posting a link to your blog: as each instance fetches the page to generate a link preview, your server can eventually buckle under the weight.

What follows is a really useful post in terms of Aral’s journey towards what he calls the ‘Small Web’. While I don’t necessarily agree that we should all have our own instances, I do think it’s useful for organisations of every size to run them.

If Elon Musk wanted to destroy mastodon.social, the flagship Mastodon instance, all he’d have to do is join it.

Thank goodness Elon isn’t that smart.

I jest, of course… Eugen would likely ban his account the moment he saw it. But it does illustrate a problem: Elon’s easy to ban. Stephen, not so much. He’s a national treasure for goodness’ sake. One does not simply ban Stephen Fry.

And yet Stephen can similarly (yet unwittingly) cause untold expense to the folks running Mastodon instances just by joining one.

The solution, for Stephen at least, is simple: he should run his own personal instance.

(Or get someone else to run it for him, like I do.)

Running his own instance would also give Stephen one additional benefit: he’d automatically get verified.

After all, if you’re talking to, say, @stephen@social.stephenfry.com, you can be sure it’s really him because you know he owns the domain.

Source: Is the fediverse about to get Fryed? (Or, “Why every toot is also a potential denial of service attack”) | Aral Balkan
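
The mechanism behind this kind of verification is the WebFinger protocol (RFC 7033), which Mastodon uses for account discovery: your client asks the domain in the handle who the account is, so control of the domain is exactly what gets verified. A minimal sketch of the lookup URL a client would construct, using Aral's hypothetical handle for Stephen Fry:

```python
def webfinger_url(handle: str) -> str:
    """Build the WebFinger lookup URL for a Fediverse handle
    of the form '@user@domain' (RFC 7033, as used by Mastodon)."""
    user, domain = handle.lstrip("@").split("@")
    return (f"https://{domain}/.well-known/webfinger"
            f"?resource=acct:{user}@{domain}")

# Whoever controls social.stephenfry.com controls what this lookup
# returns -- that domain ownership *is* the verification.
print(webfinger_url("@stephen@social.stephenfry.com"))
```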

Organisations are not just joining the Fediverse, they're setting up their own instances

It’s great to see that Raspberry Pi Ltd. and other organisations are setting up their own servers. Not only does it enable them to verify their own accounts really easily, but also those of their employees and affiliates.

I’m sure it won’t all be smooth sailing ahead for the Fediverse, especially when it comes to trust and verification. But I’m optimistic that the recent migration from Twitter is ultimately for the good of the human species.

We’ve opted to host our own instance. We’ve done this because, with multiple instances out there, we had to decide how to make sure people following us knew that our Raspberry Pi account was the “real” one.

Distributed systems are an interesting corner case when it comes to trust. Because when it comes to identity, you eventually have to trust someone. Whether that’s a corporation, like Twitter, or a government, or the person themselves. Trust is needed.

With Mastodon the root of trust for identity is the admin of the instance you’re on, and the admins on all the other instances, where you’re trusting them to remove “fake” accounts. Or, if you’re running your own instance, then it’s the domain name registrars. The details of our domain registration of the raspberrypi.social domain may be redacted for privacy, but our domain registrar knows who we are, and is the same registrar we use for all our other domains. They trust our government-issued identity to prove that we are Raspberry Pi Ltd. You can trust them, they trust the government, and ultimately the government trusts us because they can use Ultima Ratio Regum, the last argument of kings.

Source: An escape pod was jettisoned during the fighting | Raspberry Pi

A cluttered desk is a sign of genius

Perhaps it's because I'm not a designer like the author of this post, but organising your desk space like this leaves me cold. My space looks like this.

Minimalist desk

I’m proud of what I’ve done with my desk setup over the last five years. Through careful observation of what’s working and what’s not, I’ve continued to improve how it serves my creative pursuits. Still, when I look at it in the morning, I get a rush of creative energy and optimism.

Source: The Evolution of the Desk Setup | Arun

Quotation-as-title comes from a plaque my father had on his (spectacularly untidy) desk...

Decentralising online learning

A “technical presentation that is structured and designed for a non-technical audience” by Stephen Downes. With the Twitter lifeboats again being deployed, this is a timely look at how federated and decentralised technologies can be used for removing the silos from online learning.

As a new generation of digital technologies evolves we are awash in new terms and concepts: the metaverse, the fediverse, blockchain, web3, activitypub, and more. This presentation untangles these concepts and presents them from the perspective of their impact on open learning.

Source: Open Learning in the Fediverse | Stephen Downes

Presenteeism, overwork, and being your own boss

I spend a lot of time on the side of football pitches and basketball courts watching my kids playing sports. As a result, I talk to parents and grandparents from all walks of life, who are interested in the fact that I’m a co-founder of a co-op — and that, on average, I work five-hour days.

This wasn’t always the case, of course. I’ve been lucky, for sure, but also intentional about the working life I want to create. And I’m here to tell you that unless you at least partly own the business you work for, you’re going to be overworking until the end of your days.

Hidden overwork is different to working long hours in the office or on the clock at home – instead, it’s the time an employee puts into tasks on top of their brief. There are plenty of reasons people take on this extra work: to be up to speed in meetings; appear ‘across issues’ when asked about industry developments; or seem sharp in an environment in which a worker is still trying to establish themselves.

There are myriad ways a person’s day job can slip into their non-working hours: think a worker chatting to someone from their industry at their child’s birthday party, and suddenly slipping into networking mode. Or perhaps an employee hears their boss mention a book in a meeting, so they download and listen to it on evening walks for a week, stopping occasionally to jot down some notes.

[…]

However, for many, this overwork no longer feels like a choice – and that’s when things go bad. This can especially be the case, says [Alexia] Cambon [director of research at workplace-consultancy Gartner’s HR practice], when these off-hours tasks become another form of presenteeism – for instance, an employee reading a competitor’s website and sharing links in a messaging channel at night, just so they can signal to their boss they’re always on. “We’re seeing… more employees who feel monitored by their organisations, and then feel like they have to put in extra hours,” she says.

As such, this hidden overwork can do a lot of potential damage if it becomes an unspoken requirement. “If there’s more expectation and burden associated with it, that’s where people are going to have negative consequences,” says Nancy Rothbard, management professor at The Wharton School of the University of Pennsylvania, US. “That’s where it becomes tough on them.”

Source: The hidden overwork that creeps into so many jobs | BBC Worklife

Hyperfinancialisation has taken over UK politics

I’m reading This Could Be Our Future by (Kickstarter co-founder) Yancey Strickler at the moment. It rails against hyperfinancialisation and then provides a way of thinking about the world differently.

As this opinion piece in The Guardian points out, we need a way of thinking about politics and the market which isn’t driven (literally!) by investment bankers.

City

Rishi Sunak’s first job was at the US investment bank Goldman Sachs. He went on to spend 14 years in the sector before becoming an MP. In many ways, his unelected appointment marks the highpoint of big finance’s takeover of Britain’s political and economic system – a quiet infiltration of Westminster and Whitehall has been taking place over several decades and gone largely unremarked.

[…]

Looking at the coalition government, every senior figure who managed Treasury economic policy – George Osborne, Danny Alexander, David Cameron, Rupert Harrison, John Kingman and Nick Macpherson – later gained well-paid positions in the financial sector. And three of the last five chancellors have come from the sector. Jeremy Hunt’s current advisers all come from investment banking.

This matters because investment bankers have very little to do with the real economy that ordinary people inhabit. They don’t run businesses. They don’t deal with actual product and customer markets. Their work is confined to financial markets, aiding corporate financial manoeuvres, and trading and managing their own financial assets. Their primary aim is to make profits from such activities, regardless of how it affects the real economy, the national interest or employees. If that means shorting the pound or breaking up a successful company for quick profits, then so be it.

[…]

And an overpowered financial sector has certainly not been conducive to good governance, either. There’s nothing democratic about extensive public service cuts being used to pay for saving the private banking sector, as in the aftermath of the 2008 crash, or the bond markets determining the credibility of governments, or the fact that the bankers and hedge funds are the biggest single source of Conservative party donations. Nor is trust in British democracy likely to be enhanced by a super-rich PM who has allegedly avoided taxes and made a fortune as a financier at the nation’s cost.

Source: With Rishi Sunak, the City’s takeover of British politics is complete | Aeron Davis

An anarchist take on the Twitter acquisition

I’m quoting this liberally, as it’s excellent. I was on Twitter from almost when it began in January 2007 through to late 2021 and the journey from protest tool to toy of plutocrats has been brutal.

What if Trump had been able to make common cause with a critical mass of Silicon Valley billionaires? Would things have turned out differently? This is an important question, because the three-sided conflict between nationalists, neoliberals, and participatory social movements is not over.

To put this in vulgar dialectical terms:

  • Thesis: Trump’s effort to consolidate an authoritarian nationalism
  • Antithesis: opposition from neoliberal tycoons in Silicon Valley
  • Synthesis: Elon Musk buys Twitter

Understood thus, Musk’s acquisition of Twitter is not just the whim of an individual plutocrat—it is also a step towards resolving some of the contradictions within the capitalist class, the better to establish a unified front against workers and everyone else on the receiving end of the violence of the capitalist system. Whatever changes Musk introduces, they will surely reflect his class interests as the world’s richest man.

[…]

[I]nnovative models do not necessarily emerge from the commercial entrepreneurism of the Great Men of history and economics. More often, they emerge in the course of collective efforts to solve one of the problems created by the capitalist order. Resistance is the motor of history. Afterwards, opportunists like Musk use the outsize economic leverage that a profit-driven market grants them to buy up new technologies and turn them definitively against the movements and milieux that originally produced them.

[…]

Musk claims that his goal is to open up the platform for a wider range of speech. In practice, there is no such thing as “free speech” in its pure form—every decision that can shape the conditions of dialogue inevitably has implications regarding who can participate, who can be heard, and what can be said. For all we might say against them, the previous content moderators of Twitter did not prevent the platform from serving grassroots movements. We have yet to see whether Musk will intentionally target activists and organizers or simply permit reactionaries to do so on a crowdsourced basis, but it would be extremely naïve to take him at his word that his goal is to make Twitter more open.

[…]

Effectively, Musk’s acquisition of Twitter returns us to the 1980s, when the chief communications media were entirely controlled by big corporations. The difference is that today’s technologies are participatory rather than unidirectional: rather than simply seeing newscasters and celebrities, users see representations of each other, carefully curated by those who run the platforms. If anything, this makes the pretensions of social media to represent the wishes of society as a whole more insidiously persuasive than the spectacles of network television could ever be.

[…]

It’s you against the billionaires. At their disposal, they have all the wealth and power of the most formidable empire in the history of the solar system. All you have going for you is your own ingenuity, the solidarity of your comrades, and the desperation of millions like you. The billionaires succeed by concentrating power in their own hands at everyone else’s expense. For you to succeed, you must demonstrate ways that everyone can become more powerful. Two principles confront each other in this contest: on one side, individual aggrandizement at the expense of all living things; on the other, the potential of the individual to increase the self-determination of all human beings, all living creatures.

Source: The Billionaire and the Anarchists: Tracing Twitter from Its Roots as a Protest Tool to Elon Musk’s Acquisition | CrimethInc

Twitter the disaster clown car company

I didn’t foresee Elon Musk buying Twitter when I deactivated my verified account about a year ago. But it was already an algorithmic hellscape.

As this article points out, Big Tech no longer really does very interesting stuff technically. It’s all about the politics these days.

[T]he problems with Twitter are not engineering problems. They are political problems. Twitter, the company, makes very little interesting technology; the tech stack is not the valuable asset. The asset is the user base: hopelessly addicted politicians, reporters, celebrities, and other people who should know better but keep posting anyway. You! You, Elon Musk, are addicted to Twitter. You’re the asset. You just bought yourself for $44 billion dollars.

[…]

[Y]ou can write as many polite letters to advertisers as you want, but you cannot reasonably expect to collect any meaningful advertising revenue if you do not promise those advertisers “brand safety.” That means you have to ban racism, sexism, transphobia, and all kinds of other speech that is totally legal in the United States but reveals people to be total assholes. So you can make all the promises about “free speech” you want, but the dull reality is that you still have to ban a bunch of legal speech if you want to make money. And when you start doing that, your creepy new right-wing fanboys are going to viciously turn on you, just like they turn on every other social network that realizes the same essential truth.

[…]

The essential truth of every social network is that the product is content moderation, and everyone hates the people who decide how content moderation works. Content moderation is what Twitter makes — it is the thing that defines the user experience. It’s what YouTube makes, it’s what Instagram makes, it’s what TikTok makes. They all try to incentivize good stuff, disincentivize bad stuff, and delete the really bad stuff. Do you know why YouTube videos are all eight to 10 minutes long? Because that’s how long a video has to be to qualify for a second ad slot in the middle. That’s content moderation, baby — YouTube wants a certain kind of video, and it created incentives to get it. That’s the business you’re in now. The longer you fight it or pretend that you can sell something else, the more Twitter will drag you into the deepest possible muck of defending indefensible speech. And if you turn on a dime and accept that growth requires aggressive content moderation and pushing back against government speech regulations around the country and world, well, we’ll see how your fans react to that.

Source: Welcome to hell, Elon | The Verge

Image: DALL-E 2

AI is coming for middle management

It’s hard not to agree with this. Things may play out a little differently in the EU, but in the USA and UK I can foresee the middle classes despairing.

Legacy businesses will have to rely on retail and hourly support staff to be able to reduce management head count as a means of freeing up money for implementing automation. In order to do that, they will need to implement AI management tools; chat bots, scheduling, negotiating, training, data collection, diagnostic analysis, etc., beforehand.

Otherwise, they will be left to rely on an overly bureaucratic and entrenched middle management layer to do so and that solution is likely to come from outsourcing or consultants. All the while, the retail environment deteriorates as workers are tasked to replace themselves without any additional benefits; service declines, implementation falters, costs go up, more consulting required.

Union formation across the retail landscape will force corporations to reduce management head count and implement AI management solutions which focus on labor relations. The once fungible and disposable retail worker will be transformed into a highly sought after professional who will be relied upon specifically for automation implementation.

Source: AI will replace middle management before robots replace hourly workers | Chatterhead Says

Image: DeepMind

Being 'quietly fired' at work

I’ll not name the employer, and this wasn’t recent, but I’ve been ‘quietly fired’ from a job before. I never really knew why, other than a conflict of personalities, but there was no particular need for pursuing that path (instead of having a grown-up conversation) and it definitely had an impact on my mental health.

I think part of the reason this happens is because a lot of organisations have extremely poor HR functions and managers without much training. As a result, they muddle through, avoiding conflict, and causing more problems as a result.

There may not always be a good fit between jobs and the workers hired to do them. In these cases, companies and bosses may decide they want the worker to depart. Some may go through formal channels to show employees the door, but others may do what Eliza’s boss did – behave in such a way that the employee chooses to walk away. Methods may vary; bosses may marginalise workers, make their lives difficult or even set them up to fail. This can take place over weeks, but also months and years. Either way, the objective is the same: to show the worker they don’t have a future with the company and encourage them to leave.

In overt cases, this is known as ‘constructive dismissal’: when an employee is forced to leave because the employer created a hostile work environment. The more subtle phenomenon of nudging employees slowly but surely out of the door has recently been dubbed ‘quiet firing’ (the apparent flipside to ‘quiet quitting’, where employees do their job, but no more). Rather than lay off workers, employers choose to be indirect and avoid conflict. But in doing so, they often unintentionally create even greater harm.

[…]

An employee subtly nudged out the door isn’t without legal recourse, either. “If you were to look at each individual aspect of quiet firing, there’s likely nothing serious enough to prove an employer breach of contract,” says Horne. “However, there’s the last-straw doctrine: one final act by the employer which, when added together with past behaviours, can be asserted as constructive dismissal by the employee.”

More immediate though, is the mental-health cost to the worker deemed to be expendable by the employer – but who is never directly informed. “The psychological toll of quiet firing creates a sense of rejection and of being an outcast from their work group. That can have a huge negative impact on a person’s wellbeing,” says Kayes.

Source: The bosses who silently nudge out workers | BBC Worklife

Jacobin reviews the creator of Ethereum's new book

This is written in typical bombastic Jacobin style, and I’ve yet to read Vitalik Buterin’s book, but I have to say I can’t disagree with the conclusion: there is no leftist case for crypto.

Perhaps there was in the beginning? But now it’s easy to see where it’s headed. And it’s not in any way a socialist enterprise.

Their intentions aside, let’s try asking with seriousness: Is there a leftist case for crypto? Helpfully, Ethereum cofounder Buterin has published a book, 'Proof of Stake: The Making of Ethereum and the Philosophy of Blockchains', in which he outlines how cryptocurrencies represent a “new method of social incentivization” that will offer a new democratic “way to pool together our money and support public projects and activities that help create the society we want to see.”

The book is helpful, but not exactly in the way Buterin thinks. It reveals how Buterin’s case is wholly, shockingly bereft of a political vision to achieve such a society, let alone a vision rooted in the most basic political and moral principles of the Left. If 'Proof of Stake' is any indication of the existing rhetoric and principles from which one could construct a leftist case for crypto, then no leftist case for crypto can be made.

[…]

What emerges in 'Proof of Stake'... is not a clearer leftist case for crypto but a clearer sense of Buterin’s essayistic style. The instant a reader wants to hear more about this oft-mentioned equitable world of public goods that crypto can bring us, Buterin scampers back into technical discussions.

[…]

As a writer, Buterin is the perfect embodiment of crypto as we’ve come to know it: he strays from his technical world long enough to look past the convoluted discourse and glimpse the need for a political framework, but then, whether by fright or disinterest, he returns to his comfort zone. He writes with admirable passion and unusual clarity about these technical issues that his technology is confronting, but the result doesn’t add up to anything resembling a leftist case for crypto — most likely because there isn’t one.

Source: There Is No Leftist Case for Crypto | Jacobin

What does work look like? (redux)

If you’re digging a hole or otherwise doing manual work, it’s obvious when you’re working and when you’re not. The same is true, to a great extent, when teaching (my former occupation).

Doing what I do now, which is broadly under the banner of ‘knowledge work’, it can be difficult for others to see the difference between when I’m working and when I’m not. This is one of the reasons that working from home is so liberating.

The funny thing is, sitting alone thinking doesn’t “look” like work. Even more so if it’s away from your computer.

[…]

I recently had a conversation with a long-time colleague, someone I know and respect. I found it interesting that even he, who has worked in software since the 90’s, still felt odd when he wasn’t at his computer “working”. After decades of experience, he knew and understood that the most meaningful conceptual progress he made on problems was always away from his computer: on a run, in the shower, lying in bed at night. That’s where the insight came. And yet, even after all these years, he still felt a strange obligation to be at his computer because that’s too often our mental image of “working”.

Source: What “Work” Looks Like | Jim Nielsen’s Blog

Image: Charles Deluvio

It's time to move on from Twitter

It’s now almost a year since I finally deactivated my Twitter account, with no intention of going back to it. Like Ben Werdmuller in this article, I used to take a yearly ‘detox’ from the service. Coming back from it became harder and harder.

Twitter from 2007 to about 2011 (coincidentally the birth years of my children!) was amazing. It was definitely helpful in terms of my career, and I’m still in touch with people who I got to know via Twitter from that period.

But I don’t need it any more. I use various Fediverse accounts and LinkedIn to keep in touch with people personally and professionally. I also don’t share as much of my life as I used to online, partly because the world has changed and partly because therapy showed me it was all part of the mask I’m wearing.

So yes, let’s pour one out for Twitter, which, if Musk’s acquisition goes ahead, is going to be an empty husk of what it was formerly. Life moves on.

For a few years, it was tradition that I’d go offline for the year at around Thanksgiving, to give myself some time to recover from the cognitive load of all those notifications. I don’t think the constant dopamine rush is in any way good for you, but the site’s function as a de facto town square has also helped me learn and grow. It’s a health hazard and an information firehose; a community and an attack vector for democracy. More than even Facebook, I think it’s defined the internet’s role in democratic society during the 21st century.

[…]

As big tech silos diminish in stature, the all-in-one town squares we’ve enjoyed on the internet are going to start to fade from view. In some ways, it’s akin to the decline of the broadcast television networks: whereas there used to be a handful of channels that entire nations tuned into together, we now enjoy content that’s fragmented over hundreds. The same will be true of our community hangouts and conversations. In the same way that broadcast television didn’t really capture the needs of the breadth of its audience but instead enjoyed its popularity because that’s what was there at the time, we’ll find that fragmented communities better fit the needs of the breadth of diverse society. It’s a natural evolution.

It’s also one that demands better community platforms. We’re still torn between 1994-era websites, 1996-era Internet forums, and 2002-era social networks, with some video sharing platforms in-between. We could use more innovation in this space: better spaces for different kinds of conversations (and particularly asynchronous ones), better applications of distributed identities, better ways to follow conversations across all the places we’re having them. This is a time for new ideas and experimentation.

Source: The end of Twitter | werd.io

Image: Nathan Dumlao

Bridging the divide

Sure, it’s an advert for beer, but it’s also a brilliant example of how you can bring people together IRL to get to know one another despite seemingly intractable differences.

[embed]www.youtube.com/watch

Source: This New Heineken Ad is Brilliant #OpenYourWorld | YouTube

AI everywhere in education

Jon Dron makes a good point here that we need to put the humanity back into education, otherwise we’re going to have AI everywhere and a completely broken system.

I thought it would be fun, in an ironic kind of way, to use an AI art generator to illustrate this post…

To a significant extent, we already have artificial students, and artificial teachers teaching them. How ridiculous is that? How broken is the system that not only allows it but actively promotes it?

[…]

This is a wake-up call. Soon, if not already, most of the training data for the AIs will be generated by AIs. Unchecked, the result is going to be a set of ever-worse copies of copies, that become what the next generation consumes and learns from, in a vicious spiral that leaves us at best stagnant, at worst something akin to the Eloi in H.G. Wells’s Time Machine. If we don’t want this to happen then it is time for educators to reclaim, to celebrate, and (perhaps a little) to reinvent our humanity. We need, more and more, to think of education as a process of learning to be, not of learning to do, except insofar as the doing contributes to our being. It’s about people, learning to be people, in the presence of and through interaction with other people. It’s about creativity, compassion, and meaning, not the achievement of outcomes a machine could replicate with ease. I think it should always have been this way.

Source: So, this is a thing… | Jon Dron

Image: DALL-E 2 (“robot painting a picture of a robot painting a picture of a robot, in the style of Rene Magritte”)

Apple Watch Ultra vs The Scottish Highlands

Happy as I am with my Garmin Venu 2s, I probably would already have bought an Apple Watch Ultra if I didn’t also need to buy an iPhone to use one. Despite my skinny wrists, my recent health scare means that the cellular capability and ECG, combined with a more-than-24-hour battery life, would seal the deal.

So I was interested in this review by someone who took the Ultra up into the Scottish Highlands. It turns out he loved it.

I don’t think you can properly test a device like this without taking it out into the field. So the day my Ultra arrived, I booked myself onto a sleeper train up to the Scottish Highlands for a three day hiking trip to really see how it performed. I ended up hiking just over 61 miles.

The standard Apple Watches are incredibly capable devices, that I’ve used to great utility on countless hiking trips, but using them in that context always felt a bit like I was pushing the boundary of what it was intended for or capable of. Whereas the Ultra is very much designed for the backcountry context. It is more rugged, more long lasting and much easier to read…all while still being 100% an Apple Watch and not compromising any of the features that make a standard Apple Watch so useful.

Source: Testing an Apple Watch Ultra in the Scottish Highlands | David Smith

Our range of legible emotions is being constricted

A typically thought-provoking piece by L. M. Sacasas which, ironically, I’ve got plenty of time to read, process, and react to after getting up ridiculously early this morning!

It’s interesting to read this from a UK context, after an enforced mourning period after the death of the Queen. This piece definitely speaks into that context, about the “range of legible emotions” being “constricted”. After all, you weren’t even allowed to hold up a blank sheet of paper in public.

The rhythms of digital media rush me on from crisis to crisis, from outrage to outrage. Moreover, in rapid succession the same feed brings to me the tragic and the comic as well as the trivial and the consequential. So, it’s not just that I do not have the time or space to think deeply. I also do not have the time or space to feel deeply. I skim the surface of each emotional experience, but rarely can I plumb its depths or sound out its meaning. Consequently, I lose something of the richness of the emotions and miss out on their appropriate consolations. I feel enough to be overwhelmed and depleted, but I cannot inhabit an emotional experience long enough to see it through to its natural fulfillment with whatever growth of character or richness of experience that might entail.

[…]

The policing of others’ emotional expressions is one sign that the discourse is colonizing our emotional life. Such policing tends to generate an artificiality of (usually negative or critical) emotional expression, and conditions us to avoid certain (usually positive or earnest) emotional expressions. Under these conditions, emotional life is stunted. The range of legible emotions is constricted. Complex or subtle emotional experiences are overwhelmed by the demand for intense and uncomplicated emotional expressions.

Source: Impoverished Emotional Lives | The Convivial Society

Image: DALL-E 2 (“policing emotions, in the style of Leonid Afremov”)

Censorship and the porn tech stack

They say that technical innovation often comes from the porn industry, but the same is true of new forms of censorship.

For those who don’t know or remember, Tumblr used to have a policy around porn that was literally “Go nuts, show nuts. Whatever.” That was memorable and hilarious, and for many people, Tumblr both hosted and helped with the discovery of a unique type of adult content.

[…]

[N]o modern internet service in 2022 can have the rules that Tumblr did in 2007. I am personally extremely libertarian in terms of what consenting adults should be able to share, and I agree with “go nuts, show nuts” in principle, but the casually porn-friendly era of the early internet is currently impossible….

[…]

If you wanted to start an adult social network in 2022, you’d need to be web-only on iOS and side load on Android, take payment in crypto, have a way to convert crypto to fiat for business operations without being blocked, do a ton of work in age and identity verification and compliance so you don’t go to jail, protect all of that identity information so you don’t dox your users, and make a ton of money. I estimate you’d need at least $7 million a year for every 1 million daily active users to support server storage and bandwidth (the GIFs and videos shared on Tumblr use a ton of both) in addition to hosting, moderation, compliance, and developer costs.

Source: Matt on Tumblr | Why “Go Nuts, Show Nuts” Doesn’t Work in 2022

Image: Alexander Grey on Unsplash

Google Stadia as pandemic fever dream

I think the comment at the end of this article about people being wary of Stadia because Google tends to shut down services is spot-on. I really liked Stadia, and bought five controllers which I either used within our family or gifted.

During the pandemic, I completed Sniper Elite 4 and all of the DLCs via Stadia. I bought FIFA 22 and Cyberpunk 2077 at full-price as I crossed my fingers behind my back hoping the service would survive.

Ultimately, being refunded for hardware purchases and games I bought is a win-win situation for me. I cancelled my Stadia Pro account earlier this year, dabbling first with Xbox Cloud Gaming via a Razer Kishi, then upgrading my PlayStation Plus account on the PS5, and more recently investing in a Steam Deck.

The good news is that the true Armageddon situation for Stadia customers is not happening. Google is issuing refunds, which will save dedicated Stadia players from potentially losing hundreds of dollars in unplayable games. The post says: "We will be refunding all Stadia hardware purchases made through the Google Store, and all game and add-on content purchases made through the Stadia store." That notably excludes payments to the "Stadia Pro" subscription service, and you won't get hardware refunds from non-Google Store purchases, but that's a pretty good deal. Existing Pro users will be able to play, free of charge, from now until the shutdown date. The controllers are still useful as wired USB controllers, and a campaign is already starting to get Google to unlock the Bluetooth connection.

[…]

Google Stadia never lived up to its initial promise. The service, which ran a game in the cloud and sent each individual frame of video down to your computer or phone, was pitched as a gaming platform that would benefit from Google’s worldwide scale and streaming expertise. While it was a trailblazing service, competitors quickly popped up with better scale, better hardware, better relationships with developers, and better games. The service didn’t take off immediately and reportedly undershot Google’s estimates by “hundreds of thousands” of users. Google then quickly defunded the division, involving the high-profile closure of its in-house development studio before it could make a single game.

[…]

Google’s damaged reputation made the death of Stadia a self-fulfilling prophecy. No one buys Stadia games because they assume the service will be shut down, and Stadia is forced to shut down because no one buys games from it.

Source: Google kills Stadia, will refund game purchases | Ars Technica

Brexit Britain = hungry kids

As a former teacher, I almost cried reading this. Can someone with some authority and leadership stand up and say not only was Brexit a terrible idea, but the current government’s fiscal “strategy” will absolutely break this country?

Children are so hungry that they are eating rubbers or hiding in the playground because they can’t afford lunch, according to reports from headteachers across England.

[…]

One school in Lewisham, south-east London, told the charity about a child who was “pretending to eat out of an empty lunchbox” because they did not qualify for free school meals and did not want their friends to know there was no food at home.

Community food aid groups also told the Observer this week that they are struggling to cope with new demand from families unable to feed their children. “We are hearing about kids who are so hungry they are eating rubbers in school,” said Naomi Duncan, chief executive of Chefs in Schools. “Kids are coming in having not eaten anything since lunch the day before. The government has to do something.”

Source: Schools in England warn of crisis of ‘heartbreaking’ rise in hungry children | The Guardian

Your brain rewires itself after age 40

I turn 42 later this year, and this would explain a lot. Not in terms of me being unable to be super-efficient and productive, but just in terms of seeing connections everywhere.

In a systematic review recently published in the journal Psychophysiology, researchers from Monash University in Australia swept through the scientific literature, seeking to summarize how the connectivity of the human brain changes over our lifetimes. The gathered evidence suggests that in the fifth decade of life (that is, after a person turns 40), the brain starts to undergo a radical “rewiring” that results in diverse networks becoming more integrated and connected over the ensuing decades, with accompanying effects on cognition.

[…]

Early on, in our teenage and young adult years, the brain seems to have numerous, partitioned networks with high levels of inner connectivity, reflecting the ability for specialized processing to occur. That makes sense, as this is the time when we are learning how to play sports, speak languages, and develop talents. Around our mid-40s, however, that starts to change. Instead, the brain begins becoming less connected within those separate networks and more connected globally across networks. By the time we reach our 80s, the brain tends to be less regionally specialized and instead broadly connected and integrated.

[…]

“During the early years of life, there is a rapid organization of functional brain networks. A further refinement of the functional networks then takes place until around the third and fourth decade of life. A multi-faceted interplay of potentially harmful and compensatory changes can follow in aging,” the reviewers concluded.

Source: The brain undergoes a great "rewiring" after age 40 | Big Think

Gaming on the go (or anywhere)

I finally caved and bought a Steam Deck this week. I’ve loads of Steam games that I’ve collected over the years and some of them are amazing on the Deck. GRID motorsport, for example, as well as Star Wars Squadrons.

This list is a reminder to myself to explore some other, different kinds of games that I don’t usually play.

One of the neat things about the Steam Deck is that even before you’ve wrenched the handheld PC from its cardboard box, you’ll probably already own a bunch of games for it, as it’s designed to be naturally compatible with as much of the existing Steam catalogue as possible. Some games are more Deck-ready than others, however, so if you’re a newly minted owner looking for where to start, perhaps this list of the 30 best Steam Deck games might be of service?

Source: The 30 best Steam Deck games | Rock Paper Shotgun

You don't have to be the best to be valuable

A timely reminder via Emma Cragg’s latest newsletter that sharing our own perspective is enough. I particularly enjoyed the inclusion of the author’s daughter’s curl at the bottom of the newsletter as a reminder that not everything has to be ‘the best’ to have value.

I can’t tell you how many hours I’ve spent questioning if anything I have to say is worthy of being shared — questioning my own creativity, my own ideas, my own experiences put into words, my own writing and art. I’ve questioned if it matters at all since there are a million other people doing the same thing. I’ve questioned if it’s just adding more noise and consumption in a world over-stuffed with exactly that. I’ve questioned if it should even be worked on if it isn’t going to be the best. I’ve questioned my own enoughness in relation to what I create, what I put into the world, what I choose to say out loud and how I say it. I’ve questioned this newsletter, these words, this exact moment.

[…]

Yet my questioning of my work bypasses an important truth: no one else can do my work because no one else is me. And no one else can do your work because no one else is you. When I write, I write with my entire being: my lived experience and history, my genes and blood, my vision and longing, my grief and hope, my path and where I come from, my vantage point and opinion, my heart and soul — things only I have that cannot be replicated. Similarly, only you can do the work you do — whether it’s parenting or creating art, working on cars or computers, gardening or running, performing or teaching — only you can do what you do in the exact way you do it.

[…]

We easily forget that what we create is part of a web — part of something bigger — part of a huge tapestry of others sharing themselves and their work in the ways only they can, right alongside us. And when we choose to show up for our work, we add to the web in a way that makes life more full, more rich, more beautiful. We place our piece in the tapestry in a way only we can, which enhances the whole of it. We add our voice to a collective choir who may all be saying the same thing, but how much sweeter is it when there’s a whole room of it, a whole stadium, a whole world?

Source: Not the best | Human Stuff from Lisa Olivera

Teaching kids about anonymity

This website, riskyby.design, is a project of the 5Rights Foundation. It does a good job of talking about the benefits and drawbacks of anonymity in a way that isn’t patronising.

Online anonymity can take many forms, from pseudonyms that conceal “real” identities to private browsers or VPNs that allow users to be “untraceable.” There are also services designed specifically to grant users anonymity, known as “anonymous apps”.

Often conflated with privacy, true anonymity - the total absence of personally identifying information - is difficult to achieve in a digital environment where traces of ourselves are left every time we engage with a service. Anonymity is best considered on a continuum, ranging “from the totally anonymous to the thoroughly named”.

People have lots of reasons for being anonymous online. While anonymity affords a degree of protection to people like journalists, whistle-blowers and marginalised users, the lack of traceability that some types of anonymity offer may prevent people from being held accountable for their actions.

Source: Risky-By-Design | 5Rights Foundation

Sharing can be hard (online)

Granular permissions between private and public spaces are a hard problem to solve, as this blog post shows.

A few years ago, Apple acquired Color Labs, who were trying to solve the ‘share with contacts’ problem using an ‘elastic social graph’. These days, I imagine this kind of problem being solved by Bonfire.

I wanted to share the pics and videos with the people I know, so they too can see (if they like) the awesome event that I just went to.

But I had a problem that had been recurring for a while, that is how to share different photos with the different connections that I have. There are photos that I can share publicly, and there are photos that I don’t want some people to see, such as my students, acquaintances, and work-related colleagues.

Source: The rings of share – the unsolved problem of sharing | Rukshan’s Blog

Hierarchy is bad for business

I think this is a great post for people who realise that there might be something wrong with the hierarchy-by-default way we run organisations and society. It’s hard not to come away from it feeling a little liberated.

As someone who has spent the last few years in a co-op with consent-based decision-making and a flat structure, however, I don’t buy the ‘hierarchy is here to stay’ nihilism. Instead, although it’s not what we’ve been brought up to be used to, something like sociocratic circles can scale infinitely!

Being an adult means not measuring yourself entirely on other people’s definition of success. Personal growth might come in the guise of a big promotion, but it also might look like a new job, a different role, a swing to management or back, becoming well-known as a subject matter expert, mentoring others, running an affinity group, picking up new skill sets, starting a company, trying your hand at consulting, speaking at conferences, taking a sabbatical, having a family, working part time, etc. No one gets to define that but you.

[…]

Why do people climb the ladder? “Because it’s there.” And when they don’t have any other animating goals, the ladder fills a vacuum.

But if you never make the leap from externally-motivated to intrinsically-motivated, this will eventually become a serious risk factor for your career. Without an inner compass (and a renewable source of joy), you will struggle to locate and connect with the work that gives your life meaning. You will risk burnout, apathy and a serious lack of fucks given.

Source: The Hierarchy Is Bullshit (And Bad For Business) | charity.wtf

'Even over' statements

Aaron Hirtenstein mentioned this post to me earlier in the week, thinking that it might be useful for a collaborative project on which we’re working.

The idea is to try and prioritise one good thing over another and, as such, seems to be influenced by the Manifesto for Agile Software Development.

[I]f everything is a priority, nothing is a priority. As you’ve no doubt found from your own experience, the “we can have it all” mindset fails frequently as we repeatedly come up short trying to be the best at everything. A better approach is to make trade-offs explicit, by choosing one thing over another thing. Done well, it will result in focus, clarity, alignment, better decision-making, and competitive edge. We want to share with you a practical method that we often use with our clients: the even over statement.

[…]

An even over statement is a phrase containing two positive things, where the former is prioritized over the latter.

[…]

Here are a few examples:

Product tradeoffs

- Exclusive product line even over mass market adoption
- Amazing customer service even over new product features
- Mobile experience even over desktop experience
- Revenue growth even over user growth

Culture tradeoffs

- Collaboration even over focus
- Progress even over perfection
- Honest feedback even over harmony
- Impact even over following a plan
- Quality even over volume
- Hiring team players even over deep experts

Source: Even over statements: The prioritization tool that brings your strategy to life | Jurriaan Kamer

The unintended consequences of photography

Some good points in this photo essay, including photography leading to greater compassion as well as political influence.

Photographs were more than just pictures. While the inventors never intended more than to capture an image, the medium turned into a social force with far-reaching effects.

Source: 5 Unintended Consequences of Photography | The Saturday Evening Post

The 2022 Drone Photo Awards

I had a conversation with my neighbour this week about drones. They were pointing out how invasive they can be, while I was talking about the amazing photographs they can take.

Sure enough, later that day I came across this year’s Drone Photo Awards, and there are some absolute stunners in there. The ones of nature are, of course, amazing, but for some reason this one of a Dutch suburb grabbed me as my favourite.

The annual Drone Photo Awards announced its 2022 winners earlier this month, releasing a remarkable collection of images that frame the world’s most alluring landscapes from a rarely-seen view. This year’s contest garnered submissions from 2,624 participants hailing from 116 countries, and the aerial photos capture a vast array of life on Earth, including a caravan of camel shadows crossing the Arabian Desert, a waterlily harvest in West Bengal, and the veiny trails of lava emerging from a fissure near Iceland’s Fagradalsfjall volcano.

Source: From a Volcanic Fissure to a Waterlily Harvest, the 2022 Drone Photo Awards Captures Earth’s Stunning Sights from Above | Colossal

Forbes on federation

This article uses a common format in Forbes where we follow an individual who just happens to have a product to sell. The story is lightly researched, and told in a way that seems to suggest that innovation comes from white guys.

Still, I’m sharing it because it’s a mainstream discussion of ActivityPub and Scuttlebutt, protocols that underpin federated social networks. Linking to places like planetary.social also normalises the true meaning of ‘community’ as an active verb rather than a passive noun, as well as the notion of co-operatives.

While the original, aborted version of a decentralized Twitter was built using the same messaging standard as Google Cloud Messaging and Facebook Chat, a number of technical innovations have recently surfaced, enabling an even more open and decentralized architecture. In January 2018, early blockchain-based social network Steemit exploded to its peak of about a $2 billion market value and Henshaw-Plath took his first job at a blockchain startup, seeking to learn from the inside about the technology that connects people without middlemen.

Though blockchains’ decentralized infrastructures might seem perfect for connecting friends on a social network, Henshaw-Plath was eventually turned off by their reliance on cryptocurrency. “Our feeling was that the primary social interaction should be based on intrinsic motivation,” says Henshaw-Plath. “If you integrate financial incentives into everything, then it can make it into a financial game. And then all of a sudden, people aren’t there because of their human connection and collaboration.” Users, it would seem, agree. Steemit fell 94% from its all-time high to about $107 million today.

Henshaw-Plath started looking for alternatives. “Eventually,” he says, “I discovered a protocol created by this guy who lives on a sailboat in New Zealand.”

That is Dominic Tarr, an eccentric, open-source developer who lives just off the coast of Auckland on a Wharram catamaran named Yes Let’s he found on the side of a road. Tired of being unable to send emails to his friends from his Pacific Ocean location, Tarr wrote software that uses technology similar to Apple’s Airdrop to create a protocol that lets anyone build social networks where information moves like gossip, directly from phone to phone—no internet service provider required.

Entrepreneurs using the protocol get to choose their own business models, their own designs and how their systems function. Users, meanwhile, can move freely from network to network. Tarr called the software Secure Scuttlebutt after the cask that stored water on old sailboats, which is also maritime slang for “gossip,” as in conversations held around a water cooler. “Modern capitalism believes that what people want is convenience,” says Tarr. “But I think what people actually want is a sense of control.”

Scuttlebutt itself isn’t supported by venture capital. Instead, taking a page from the way Tim Berners-Lee funded the creation of the World Wide Web, Scuttlebutt is backed by grants that helped jump-start the process. Similar to a distributed autonomous organization (DAO) that connects groups on a blockchain, there are now hundreds of users who personally donate to the cause and an estimated 30,000 people using one of at least six social networks on the protocol. An estimated 4 million more use the largest social protocol, Mastodon, which supports 60 niche social networks, with a rapidly growing pool of blockchain competitors in the works.

Source: Jack Dorsey’s Former Boss Is Building A Decentralized Twitter | Forbes

A philosophical approach to performative language

I don’t know anything about Ariel Pontes, the author of this article, other than seeing that they’re a member of the Effective Altruism community. (Which is a small red flag in and of itself, as it tends to be full of hyper-rationalist solutionist dudes.)

However, what I appreciate about this loooooong article is that Pontes applies philosophical concepts I’ve come across before to talk about the different roles language can play across the political divide.

People are not just tricked into believing falsities anymore, they no longer care about what’s true or false as long as it supports their narratives and hashtags. But can we draw a sharp boundary between smart, rational, objective people, and crazy, fact-denying post-truthers? Or do we all use non-factual language to some extent? What are we really doing when we say things like “meat is murder” or “all lives matter”?

[…]

Most people would probably agree, if asked, that humans are prone to black-and-white thinking, and that this is bad. But few of us actually make a constant conscious effort to avoid this tendency of ours in our daily lives. Our tribal brains are quick to label people as belonging either to our team or that of the enemy, for example, and it’s hard to accept that there are many possibilities in between.

[...]

Once we start seeing language as a tool used to play different games, it becomes natural to ask: what types of games are people playing out there? In his lecture series posthumously published as How To Do Things With Words, J. L. Austin introduces the concept of a “performative utterance” or “speech act”, a sentence that does not describe or “constate” any fact, but performs an action.

[...]

In his lectures about performative utterances, Austin introduces what he calls the descriptive fallacy. This fallacy is committed when somebody interprets a performative utterance as merely descriptive, subsequently dismissing it as false or nonsense when in fact it has a very important role, it’s just that this role is not simply stating facts. If somebody goes on vacation after a stressful period at work and, as they finally lie on their beach chair in their favorite resort with their favorite cocktail in their hands, they say “life is good”, it would be absurd to say “this statement is meaningless because it cannot be empirically verified”. Clearly it is an expression of a state of mind that doesn’t really have a factual dimension at all.

What’s important to emphasize here, however, is that those who attack speech acts as false or meaningless are as guilty of the descriptive fallacy as those who defend their performative utterances on factual grounds, which is regrettably common. People are not usually aware that, besides labelling a statement as “true” or “false”, they can also label it as “purely performative, lacking factual content”. The performative nature of language is not something people are explicitly aware of in general. As a consequence, when a statement is phrased as factual but is confusing and hard to grasp as factually true, our intuitive reaction is to label it as false. On the other hand, if a statement becomes part of our identity as a consequence of being used as the slogan of a movement we strongly support, we feel tempted to defend it as factually true even though it might be quite plainly false or factually meaningless.

[...]

Language is complex. A statement can always be interpreted in many ways. In the age of social media, where a tweet can be read by millions of people, it is always possible that somebody will read a malicious insinuation into a genuinely well-intended comment. Because of this, it is often helpful to say what you don’t mean. Of course, no matter how much effort we make, somebody might always attack us. This is a reality we have to simply come to terms with. But it doesn’t mean we shouldn’t try.

Source: Performative language. How philosophy of language can help us… | Ariel Pontes

Technological Liturgies

A typically thoughtful article from L. M. Sacasas in which they “explore a somewhat eccentric frame by which to consider how we relate to our technologies, particularly those we hold close to our bodies.” It’s worth reading the whole thing, especially if you grew up in a church environment as it will have particular resonance.

Pastoral scene

I would propose that we take a liturgical perspective on our use of technology. (You can imagine the word “liturgical” in quotation marks, if you like.) The point of taking such a perspective is to perceive the formative power of the practices, habits, and rhythms that emerge from our use of certain technologies, hour by hour, day by day, month after month, year in and year out. The underlying idea here is relatively simple but perhaps for that reason easy to forget. We all have certain aspirations about the kind of person we want to be, the kind of relationships we want to enjoy, how we would like our days to be ordered, the sort of society we want to inhabit. These aspirations can be thwarted in any number of ways, of course, and often by forces outside of our control. But I suspect that on occasion our aspirations might also be thwarted by the unnoticed patterns of thought, perception, and action that arise from our technologically mediated liturgies. I don’t call them liturgies as a gimmick, but rather to cast a different, hopefully revealing light on the mundane and commonplace. The image to bear in mind is that of the person who finds themselves handling their smartphone as others might their rosary beads.

[…]

Say, for example, that I desire to be a more patient person. This is a fine and noble desire. I suspect some of you have desired the same for yourselves at various points. But patience is hard to come by. I find myself lacking patience in the crucial moments regardless of how ardently I have desired it. Why might this be the case? I’m sure there’s more than one answer to this question, but we should at least consider the possibility that my failure to cultivate patience stems from the nature of the technological liturgies that structure my experience. Because speed and efficiency are so often the very reason why I turn to technologies of various sorts, I have been conditioning myself to expect something approaching instantaneity in the way the world responds to my demands. If at every possible point I have adopted tools and devices which promise to make things faster and more efficient, I should not be surprised that I have come to be the sort of person who cannot abide delay and frustration.

[…]

The point of the exercise is not to divest ourselves of such liturgies altogether. Like certain low church congregations that claim they have no liturgies, we would only deepen the power of the unnoticed patterns shaping our thought and actions. And, more to the point, we would be ceding this power not to the liturgies themselves, but to the interests served by those who have crafted and designed those liturgies. My loneliness is not assuaged by my habitual use of social media. My anxiety is not meaningfully relieved by the habit of consumption engendered by the liturgies crafted for me by Amazon. My health is not necessarily improved by compulsive use of health tracking apps. Indeed, in the latter case, the relevant liturgies will tempt me to reduce health and flourishing to what the apps can measure and quantify.

Source: Taking Stock of Our Technological Liturgies | The Convivial Society

Organisational design: the floor is lava

Coda Hale was, until last year, Principal Engineer at MailChimp, and in this article they seamlessly mix prose and equations in a way that betrays an engineering background.

You shouldn’t let that put you off, though, as this deep dive into organisational design is absolutely worth it. I want to quote two sections in particular, but go and read the whole thing!

The first bit is the difference between the way that management visualises the structure of an organisation and the way it actually works. Hale explains this as the difference between things that look like they’re working in parallel but which are actually sequential:

As with writing highly-concurrent applications, building high-performing organizations requires a careful and continuous search for shared resources, and developing explicit strategies for mitigating their impact on performance.

A commonly applied but rarely successful strategy is using external resources–e.g. consultants, agencies, staff augmentation–as an end-run around contention on internal resources. While the consultants can indeed move quickly in a low-contention environment, integrating their work product back into the contended resources often has the effect of… a quadratic spike in wait times which increases utilization which in turn produces a superlinear spike in wait times… Successful strategies for reducing contention include increasing the number of instances of a shared resource (e.g., adding bathrooms as we add employees) and developing stateless heuristics for coordinating access to shared resources (e.g., grouping employees into teams).

As with heavily layered applications, the more distance between those designing the organization and the work being done, the greater the risk of unmanaged points of contention. Top-down organizational methods can lead to subdivisions which seem like parallel efforts when listed on a slide but which are, in actuality, highly interdependent and interlocking. Staffing highly sequential efforts as if they were entirely parallel leads to catastrophe.

I’ve definitely been in the situation as a consultant multiple times where we’re used as a way to get around organisational inefficiencies. But then when you plug the work back into the organisation, you have to sit and wait until the next bit of work comes along. There’s no rhythm to it, which is annoying for everyone. It’s incoherent.
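
The "quadratic spike in wait times" Hale mentions is standard queueing behaviour, and you can see it with a few lines of code. This is a sketch of my own (not from Hale's article), using the textbook M/M/1 queue formula for the average delay on a single shared resource; the arrival and service rates are made-up numbers purely for illustration:

```python
# Sketch (not from the article): average delay waiting on one shared
# resource, modelled as an M/M/1 queue. With utilization rho = lam / mu,
# the average queueing delay is W_q = rho / (mu - lam). Wait times
# explode as the shared resource approaches 100% utilization.

def mm1_wait(arrival_rate: float, service_rate: float) -> float:
    """Average queueing delay for a single shared resource (M/M/1)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrivals outpace service")
    rho = arrival_rate / service_rate  # utilization
    return rho / (service_rate - arrival_rate)

service_rate = 10.0  # jobs the shared resource can handle per hour
for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
    wait = mm1_wait(utilization * service_rate, service_rate)
    print(f"{utilization:.0%} utilized -> average wait {wait:.2f} hours")
```

Going from 50% to 99% utilization of the shared resource multiplies the average wait by roughly a hundred, which is why dumping consultants' work product back into an already-contended team backfires.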

So the best thing to do, whether you’re working with outside people/orgs or not, is to limit the number of people who need to be consulted as part of processes:

The only scalable strategy for containing coherence costs is to limit the number of people an individual needs to talk to in order to do their job to a constant factor.

In terms of organizational design, this means limiting both the types and numbers of consulted constituencies in the organization’s process. Each additional person or group in a responsibility assignment matrix geometrically increases the area of that matrix. Each additional responsibility assignment in that matrix geometrically increases the cost of organizational coherence.

It’s also worth noting that these pair-wise communications don’t need to be formal, planned, or even well-known in order to have costs. Neither your employee handbook nor your calendar are accurate depictions of how work in the organization is done. Unless your organization is staffed with zombies, members of the organization will constantly be subverting standard operating procedure in order to get actual work done. Even ants improvise. An accurate accounting of these hidden costs can only be developed via an honest, blameless, and continuous end-to-end analysis of the work as it is happening.
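
The geometric growth Hale describes is just the combinatorics of pairwise communication. A quick sketch of my own (not from the article): among n people who may all need to consult one another, the number of potential channels is n(n−1)/2, so each added consultee raises coordination cost faster than linearly.

```python
# Sketch (not from the article): potential pairwise communication
# channels among n people grows quadratically, which is why capping the
# number of people anyone must consult keeps coherence costs bounded.

def channels(n: int) -> int:
    """Potential pairwise communication channels among n people."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 20, 50):
    print(f"{n:>2} people -> {channels(n):>4} channels")
```

Two people share one channel; fifty share 1,225, which is the arithmetic behind limiting consulted constituencies "to a constant factor".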

This is an article I’ll be coming back to!

Source: Work Is Work | codahale.com

Image: CC BY-NC-SA LockRikard

Three components of the public sphere

My views on monarchy are, well, that there shouldn’t be one in my country, nor should there be any in the world. This post by Ethan Zuckerman goes into three levels of reaction around the death of Elizabeth II, but more interestingly explains his thinking behind a new experimental course he’s running this semester.

As I thought through the hundreds of ideas I wanted to share over the course of twenty-something lectures, I’ve centered on three core concepts I want to try and get across. The first is simple: democracy requires a robust and healthy public sphere, and American democracy was designed with that public sphere as a core component.

Second – and this one has taken me more time to understand – the public sphere includes at least three components: a way of knowing what’s going on in the world (news), a space for discussing public life, and whatever precursors allow individuals to participate in these discussions. For Habermas’s public sphere, those precursors included being male, wealthy, white, urban and literate… hence the need for Nancy Fraser’s recognition of subaltern counterpublics. Public schooling and libraries are anchored in the idea of enabling people to participate in the public sphere.

The third idea is that as technology and economic models change, all three of these components – the nature of news, discourse, and access – change as well. The obvious change we’re focused on is the displacement of a broadcast public sphere by a highly participatory digital public sphere, but we can see previous moments of upheaval: the rise of mass media with the penny press, the rise of propaganda as broadcast media puts increased control of the public sphere in the hands of corporations and governments.

Source: The Monarchy, the Subaltern and the Public Sphere | Ethan Zuckerman

What is ransom capitalism?

Gareth Fearn argues, and I absolutely agree, that governments are so captured by neoliberal thinking that some types of companies or sectors are seen as “too big to fail”. This leads to them being bailed out, which is a capitulation to a kind of ‘ransom capitalism’.

Bailouts are an ideal intervention for a decaying neoliberal politics: they maintain capital flows, rising asset prices and the upwards redistribution of wealth, while supporting the minimum needs of enough of the population to prevent total social breakdown.

British politicians’ responses to soaring energy prices conform to the bailout consensus. Boris Johnson is promising ‘extra cash’, though leaving it up to his successor to work out the details (Liz Truss and Rishi Sunak have so far mostly offered tax cuts). Ed Davey, the leader of the Liberal Democrats, recently proposed an ‘energy furlough scheme’: the government would absorb the cost of rising energy prices and get some of the money back with a windfall tax. Labour soon followed suit, offering a similar cap to energy prices funded through some slightly more creative accounting.

In both cases, energy companies would receive large amounts of public money (at least £29 billion) to enable them to continue charging their customers sums that many cannot afford. With these proposals following so closely behind the pandemic bailouts, which had the backing of all UK parties, we can see there is broad support for such extraordinary interventions with very little thought being given to the causes of the crisis – beyond criticism of the outgoing prime minister’s personality.

[…]

There is an underlying assumption that at some point there will be a return to the ‘normality’ of self-regulating markets of private actors. But bailouts without structural change keep us on the path of ever-increasing losses for the public just to sustain the basics of life, while maintaining a failed market system which is not only generating crises but limiting responses to them – as many nations in the Global South have experienced for decades.

High inflation is not unique to the UK, but the capitulation to the energy companies’ ransom demands seems especially acute here, as is the actual rate of rising costs. France is able to lower prices through its state energy company, Spain and Germany have intervened to reduce the cost of public transport, and many of the proposed measures across Europe involve taking equity in energy companies or stricter regulation. But the UK is too far down the neoliberal rabbit-hole even to countenance such mild social democratic policies.

Source: Ransom Capitalism | London Review of Books

Professional try-hards

I love this article about, variously, work-life balance, the future of work, quiet quitting, and the ridiculousness of Silicon Valley culture. To be honest, I feel very fortunate to not have to put up with any of this bullshit in my day-to-day work.

[T]he future of white-collar work has morphed from an advertiser-friendly thought exercise to an existential question with a daily subset of moral riddles: Is that an illicit midday nap, or is it just work-life balance? Is it really the end of work friends, or is it just that a defensive herd mentality is no longer crucial to getting through the day? Is it worse to work on vacation, or to have a little vacation at work? Is the delivery bot lost in the woods, or is he finally free?

[…]

I’d love to be flip and just say that, at this point in planetary decline, anyone who’s a little too interested in emails and Google Docs basically counts as a try-hard, but there’s a specific category of salaryfolk and company leadership provoking a justifiable kind of scorn. The professional try-hard I’m talking about is someone who, in the year 2022, still earnestly and performatively buys into the white-collar hustle and prides themselves on it. You know this person. They’re a cross between a teacher’s pet and a supply-room narc; if they’re not already a manager, they certainly aim to be one day. While everyone else got with the program that trying hard at work—against a political and national backdrop that feels like daily, endless crisis—is ridiculous, or worse, meaningless, these guys (it’s not exclusively a male thing, of course, but I’m not not being gendered on purpose) haven’t quite gotten with the program.

[…]

What’s clear—and what’s behind the reason that professional try-hards are flailing so fantastically—is that the very concept of corporate competence itself has become a joke. The ideals that white-collar striving is built upon have started to crumble: Imagine believing in true “innovation” in a world where Meta, formerly the most exciting company on earth, is reduced to hitting copy and paste. Imagine still buying into the corporate ladder in any sector where performance evaluations might be rife with racial disparities, or where the executives have essentially admitted on the stand that their entire industry is just a game of roulette. Imagine having faith at all in any idea of “corporate good” when the guy celebrated for years as the “one moral CEO in America” is now the subject of a rape investigation (that CEO has denied the allegations). Just last month, Adam Neumann, the disgraced WeWork founder whose implosion was so well-documented that it got turned into prestige television, reportedly received a $350 million second chance for pretty much the same idea he rode to ruin last time.

Imagine, in other words, believing anyone in charge knows what they’re doing. But okay, sure, sic the productivity-management software on everyone else to make sure we’re not online shopping a touch too much.

Source: The Professional Try-Hard Is Dead, But You Still Need to Return to the Office | Vanity Fair

Every complex problem has a solution which is simple, direct, plausible — and wrong

This is a great article by Michał Woźniak (@rysiek) which cogently argues that the answer to misinformation and disinformation comes not through heavy-handed legislation, or even fact-checking, but rather through decentralisation of funding, technology, and power.

I really should have spoken with him when I was working on the Bonfire Zappa report.

While it is possible to define misinformation and disinformation, any such definition necessarily relies on things that are not easy (or possible) to quickly verify: a news item’s relation to truth, and its authors’ or distributors’ intent.

This is especially valid within any domain that deals with complex knowledge that is highly nuanced, especially when stakes are high and emotions heat up. Public debate around COVID-19 is a chilling example. Regardless of how much “own research” anyone has done, for those without an advanced medical and scientific background it eventually boiled down to the question of “who do you trust”. Some trusted medical professionals, some didn’t (and still don’t).

[…]

Disinformation peddlers are not just trying to push specific narratives. The broader aim is to discredit the very idea that there can at all exist any reliable, trustworthy information source. After all, if nothing is trustworthy, the disinformation peddlers themselves are as trustworthy as it gets. The target is trust itself.

[…]

I believe that we are looking for solutions to the wrong aspects of the problem. Instead of trying to legislate misinformation and disinformation away, we should instead be looking closely at how is it possible that it spreads so fast (and who benefits from this). We should be finding ways to fix the media funding crisis; and we should be making sure that future generations receive the mental tools that would allow them to cut through biases, hoaxes, rhetorical tricks, and logical fallacies weaponized to wage information wars.

Source: Fighting Disinformation: We’re Solving The Wrong Problems / Tactical Media Room

WFH from anywhere

Winter in the UK isn’t much fun, so if we didn’t have kids I would absolutely be working from a different country for part of it. Why not?

This is not a new thing: when I worked at Mozilla (2012-15) I almost moved to Gozo, a little island off Malta, as I could work from anywhere. So long as people are productive, and you can interact with them at times that work for everyone, what’s the problem?

Two-plus years into the pandemic, companies all around the world are starting to ask—and sometimes demand—that their employees return to the office. In response, many employees have resisted, citing reduced commute times, better work-life balance, and a greater ability to concentrate at home.

But for an unknown number of people, there is another reason as well: They can’t come in, because they secretly don’t live in the same state or even country anymore.

The issue is larger than it may seem, and many companies are struggling to deal with “employees relocating themselves to ‘nicer’ places to work without letting the business know,” said Robby Wogan, the CEO of global mobility company MoveAssist. One survey performed on behalf of the HR company Topia found that as many as 40 percent of HR professionals had recently discovered that employees were working outside their home state or country, and that only 46 percent were “very confident” they know where most of their workers are, down from 60 percent just last year.

That uncertainty appears justified. In the same survey, 66 percent of the 1,500 full-time employees surveyed in the U.S. and U.K. said they did not tell human resources about all the dates they worked outside of their state or country, and 94 percent said they believe they should be able to work wherever they want if their work gets done.

Source: Some WFH Employees Have a Secret: They Now Live in Another Country | VICE

Paying it forward

It’s worth clicking through to the Axios summary of some recent research showing that people underestimate the impact of small acts of kindness.

I notice this in my own life: when I’m driving, if another driver smiles and allows me to merge into the queue, I’m more likely to do the same for others; if I check in on people and ask how they’re doing, they’re more likely to do the same for me. And so on.

Small and simple, kind gestures have immense, underestimated power.

Source: The outsized power of small acts of kindness

CDNs are not phone books

The notorious website Kiwi Farms is no longer being protected by Cloudflare’s CDN (Content Delivery Network). This means that it is itself subject to DDoS (Distributed Denial of Service) attacks and other cybersecurity risks.

It’s been a long time coming. I agree with Ryan Broderick’s take on this: websites are like street corners, and it helps to conceptualise them as such.

Prince said that Cloudflare’s security services, many of which are free and are used by an estimated 20 percent of the entire internet, should be thought of as a utility. “Just as the telephone company doesn't terminate your line if you say awful, racist, bigoted things, we have concluded in consultation with politicians, policy makers, and experts that turning off security services because we think what you publish is despicable is the wrong policy,” Prince wrote.

Which is a good line. I’m sure people who are old enough to remember when telephones weren’t computers love it. But I’m not really sure it works here. Telephones are not publishing platforms, nor are they searchable public records. Comparing a message board that has around nine million visitors a month to someone saying something racist on the telephone is, actually, nuts.

But, more broadly, I don’t even think this is a free speech issue. Cloudflare isn’t a government entity and it’s not putting Kiwi Farms members in jail. In fact, it seems like some users have done that themselves. A German woman seems to have accidentally exposed her real identity amid the constant migration of the site and now may be charged for cyberstalking. Instead, Cloudflare, a private company, has removed their protection from the site, which allows activists and hackers to DDoS it, taking it down.

[…]

Websites are not similar to telephones. They are not even similar to books or magazines. They are street corners, they are billboards, they are parks, they are shopping malls, they are spaces where people congregate. Just because you cannot see the (hopefully) tens of thousands of other people reading this blog post right now doesn’t mean they’re not there. And that is doubly true for a user-generated content platform. And regardless of the right to free speech and the right to assemble guaranteed in America, if the crowd you bring together in a physical space starts to threaten people, even if they’re doing it in the periphery of your audience, the private security company you hired as crowd control no longer has to support you. To me, it’s honestly just that simple.

Source: A website is a street corner | by Ryan Broderick

Image: Karol Smoczynski | Unsplash

Ad-free urban spaces

I've never understood why we allow so much advertising in our lives. Thankfully, I live in a small town without that much of it, but it's always a massive culture shock when I go to a city. So I very much support this campaign led by Charlotte Gage of Adfree Cities.

Related: over the last few years, I've got my kids into the habit of pressing the mute button during advert breaks on TV. This makes the adverts seem even more ridiculous, allows for conversation, and still allows you to see when the programme you're watching comes back on!

"These ads are in the public space without any consultation about what is shown on them," she says. "Plus they cause light pollution, and the ads are for things people can't afford, or don't need."

Ms Gage is the network director of UK pressure group Adfree Cities, which wants a complete ban on all outdoor corporate advertising. This would also apply to the sides of buses, and on the London Underground and other rail and metro systems.

[...]

Ms Gage says that while there are "ethical issues with junk food ads, pay day loans and high-carbon products [in particular], people would rather see community ads and art rather than have multi-billion dollar companies putting logos and images everywhere".

Source: Should billboard advertising be banned? | BBC News

You should only ever be busy on purpose

If you’re consistently over-stretched, you’re doing it wrong. And if you’re not doing it wrong, your organisation is.

If you are too busy, why? What are you and your team in pursuit of? If you are too busy because of pressure imposed by someone else, why do they believe they should be able to ask more of you?

Are you too busy because the business has expectations for you to deliver by a certain date regardless of the capacity of your team (who are delivering high quality work and always working on the next most important thing as prioritised by the business)? Then that is a conversation that you need to have with your boss and your stakeholders. That is likely unreasonable.

[…]

You and your team should never be so busy that you can’t do your job properly or that you begin to hate your work. Especially if you’re a leader or a leader-of-leaders, then you should actually (yes you should, I’ll die on this hill) have free time to think alone, and to talk and ideate organically with peers. Contrary to popular belief: back-to-back meetings isn’t a badge of honour, it’s a red flag.

Source: Why are you so busy? | Tom Lingham

Image: DALL-E 2

The art of a cup of tea

There’s something about having a cup of tea that’s very different to having a cup of coffee. They’re both a means to an end, but the ends differ massively.

The point is that even if you are the kind of person who wants to do nothing, the world today will seemingly not leave you alone to your languid contemplation and staring out of the window nothingness. It is unacceptable, it’s bad for the economy, it’s somehow letting the side down. So in my pretty vast experience of being an idler in a world of strivers I have found that you need some sort of prop to handle while doing nothing. This explains the enduring, never-to-be-fully-extinguished appeal of the cigarette break and its more wholesome cousin, the subject of today’s discussion, the lovely cup of tea.

[…]

If you are sleep deprived tea will not give you a jolt of fleeting alertness to get through another day and help delay confronting the issue that you aren’t getting enough quality sleep. If you have masses of work and a tight deadline, tea will not give you a ‘Limitless’ style ability to get it done despite the odds. If anything tea could slow you down. I can’t envision a montage in one of those interminable, never-ending TV series where the group of lawyers or programmers or detectives have to pull an all-nighter beginning with the team getting out the single origin Assam and their best China.

[…]

We intuitively know that the tea itself is probably a nothing in and by itself, and that it probably does nothing in and by itself, but that this is a nothing we can ritualise and return to as a refuge from the pressures of the day to day. In a world overstuffed with disorder and frantic activity, calm is found not in a location but in a ritual. It is found by enjoying an end-in-itself pleasure that promises nothing but itself. And that’s all it needs to be.

Source: On Tea and the Art of Doing Nothing | Thomas J Bevan

Against 'talkocracy'

Research. Build. Test. Repeat.

Not endless talking and pontification.

Everywhere I look, I see the rise of talkocracy — others have called it the dictatorship of the articulate. Talkers standing in the way of builders; offering we ponder, analyze, investigate, research, dissect, agonize endlessly over plans before we lay a single brick.

[…]

This endless pondering introduces years and years of unnecessary delays. But worse: it kills the will to build. There is nothing builders hate more than endless meetings with people who can’t even spell “CPU.”

You know you’ve lost when they’ve internalized the conservative voices, which can now stop them without even having to try. It’s when your intern has a neat idea for something he could hack together in a few hours — but then thinks, what’s the point?

Source: The Dictatorship of the Articulate | Florent Crivello

Cultivating (your) serendipity (surface)

I used to have a quotation on the wall as a History teacher that said “opportunity is missed by most people because it’s dressed in overalls and looks like work”. It’s been attributed to several people, but it’s the point that’s important: opportunities arrive in life, but you have to be looking for them.

Previously, I’ve called this (on my now defunct Discours.es blog) increasing your serendipity surface. In this post, Rob Miller breaks it down into three parts, which is interesting.

But if serendipity is the result of chance, does that mean it’s out of our control? Are we just at the whims of fate? Can we organise our lives to be more conducive to these serendipitous benefits?

Three factors govern the supply of serendipity in our lives and the extent to which we notice and benefit from that serendipity:

  1. Supply – how many opportunities we encounter
  2. Response – whether we notice those opportunities and how we respond to them
  3. Growth – whether and how we internalise the result of our encounters with serendipity

Our supply of interesting opportunities is certainly within our control. Most straightforwardly, we could deliberately put ourselves into situations of extreme novelty: travelling, for example, or seeking out new people to meet, or reading unfamiliar materials. It’s also possible to introduce randomness into what might otherwise be routine, as the writer Robin Sloan has described in his own writing process. However you do it, putting yourself in front of a steady stream of new things – increasing your supply of novelty – will increase the chances of encountering unexpected benefits.

But we’re also surrounded at all times by unnoticed novelty, which links to the second factor: the extent to which we notice and respond positively to novel situations. There are countless ways to respond poorly to novelty. We can ignore it; we can notice it but greet it with indifference; we can fear it; we can attack it, as we might if it runs counter to our existing beliefs. All of these responses ensure the snuffing out of serendipity. The only response that allows for serendipity is improvisation: embracing novelty and making it a part of what you do.

Source: Cultivating serendipity | Roblog, the blog of Rob Miller

Life product tiers

A bit of fun from xkcd, but with some underlying truth in terms of how people experience life almost as if it were offered in different product tiers.

Source: xkcd: Universe Price Tiers

AI art is, well, still art

Art is a social thing. So it does not surprise me at all that people are upset that an AI-generated artwork won a competition.

However, after spending some time today (and in previous weeks) messing about with Midjourney and DALL-E 2 there’s a real talent to ‘prompt-crafting’. You can create amazing things, but not just by whacking in a bunch of words. Unless you’re very lucky.

Perhaps a ‘competition’ isn’t the best way to show off art. And perhaps selling individual works isn’t the best way to fund it?

A synthetic media artist named Jason Allen entered AI-generated artwork into the Colorado State Fair fine arts competition and announced last week that he won first place in the Digital Arts/Digitally Manipulated Photography category...

Allen used Midjourney—a commercial image synthesis model available through a Discord server—to create a series of three images. He then upscaled them, printed them on canvas, and submitted them to the competition in early August. To his delight, one of the images (titled Théâtre D’opéra Spatial) captured the top prize, and he posted about his victory on the Midjourney Discord server on Friday.

Source: AI wins state fair art contest, annoys humans | Ars Technica

Learning through pathways

This is an interesting post that uses Google Maps as a metaphor for learning. In other words, get from where you are, to where you need to be, using an optimal route.

The author did some research, built pathways based on the findings, and presented them back to users, who didn’t like them.

A similar approach was used in the Mozilla Discover project around Open Badges which is written up on Badge Wiki. The difference there, I guess, was that people were able to recognise specific inflection points that had meaning for them, and ascribe badges.

My opinion would be that people learn in different ways because of the context they bring to the table. You can rely on certain things to inspire most people, for example, or other things to resonate with some people. But there’s always a bit of experimentation to learning. It’s more like improvisational jazz than a symphony!

So, we do learn through pathways, but those pathways only have certain parts of the journey in common. To use another metaphor, it’s a bit like sharing a bus journey with other passengers for several stops, before getting on another bus (or hitch-hiking, or taking a taxi, or…)

I wanted to check the assumptions I had regarding the generation of paths for the Learning Map. So I scheduled interviews with instructors of these schools who were themselves also famous musicians. Their assumption was that I was interviewing them to create a story about their lives, but I was actually doing something far more interesting, I was deeply listening to the story of what and how they learned, and to the chronological order of their learning journey.

I asked them to describe their musical career, I let them know I wanted to create a timeline of their story, to start at the very beginning, and then to take me step by step through to their successes of today. Then I sat back and started to take notes:

Their first experience with music may have been with their mom who played the guitar, a lead singer they had a crush on, or a drum kit they got as a 5-year-old. They learned some key lessons and their journey kicked off. Over time, they may have learned to sing in Church, or worked in a recording studio. Some went to school, where they were introduced to new ideas from their peers, started a band, or trained underneath a mentor. Many went in completely different directions, they first became a chef, worked on a boat, or started bartending, each experience taught them skills they would later apply to their music. As they shared their experiences, I took notes, not about the events, the characters, or places, but only on the things they learned and when. I was mapping out their learning journeys, step by step, from their first experiences to their current work. I made an effort to cut through the superficial, and get to the heart of the lessons learned, this required the musician to deeply introspect, and was a fascinating experience on its own.

Several weeks later, after they had forgotten about the interviews and I had time to map each story out, I presented it back to them. But not as a story of their lives, instead, as a course, I wanted to run by them and get their professional opinion on. “What do you think of this course” “Do you think this is a good structure for a course?” I asked. They did not know this course was modeled after their own stories, they did not have any reason to tie this course back to the interviews conducted some months back and their responses were resounding: “This is a terribly designed course!” “How could you even think of wasting my time with this”, “Don’t you know that you need to understand X before you learn Y”…

More on this in this blog post about the session we ran at The Badge Summit on designing for recognition. Be sure to click through to the accompanying slide deck and the constellation model approach in slides 16 and 24!

Source: We dont learn through pathways | Dev4X

Personal, portable heating solutions

I read this article when it was published on Low-tech Magazine earlier this year. Given the cost of living crisis is being exacerbated by gas prices this winter, Team Belshaw will be using hot water bottles as personal, portable heating solutions!

A hot water bottle is a sealable container filled with hot water, often enclosed in a textile cover, which is directly placed against a part of the body for thermal comfort. The hot water bottle is still a common household item in some places – such as the UK and Japan – but it is largely forgotten or disregarded in most of the industrialised world. If people know of it, they usually associate it with pain relief rather than thermal comfort, or they consider its use an outdated practice for the poor and the elderly.

[…]

Hot water bottles can be combined with a blanket, which further increases thermal comfort. If I put a blanket over the lower part of my body when seated at my desk, it traps the heat from the bottles and keeps them warm for longer. Even better is a blanket with a hole in the middle to stick your head through – a basic poncho – or a blanket with sleeves. If it’s large enough, it creates a tent-like structure that puts your whole body in the warm microclimate created by the water bottles. Draping long clothes over a personal heat source was a common comfort strategy in earlier times.

Source: The Revenge of the Hot Water Bottle | Low-tech Magazine

Potentially the cheapest way of generating clean energy?

I’m sharing this as a potentially optimistic vision of another way of creating a lot of energy for the world. As the article states, this particular version might not be it, but sea waves contain a lot of energy…

Solar electricity generation is proliferating globally and becoming a key pillar of the decarbonization era. Lunar energy is taking a lot longer; tidal and wave energy is tantalizingly easy to see; step into the surf in high wave conditions and it's obvious there's an enormous amount of power in the ocean, just waiting to be tapped. But it's also an incredibly harsh and punishing environment, and we're yet to see tidal or wave energy harnessed on a mass scale.

That doesn’t mean people aren’t trying – we’ve seen many tidal energy ideas and projects over the years, and just as many dedicated to pulling in wave energy for use on land. There are a lot of prototypes and small-scale commercial installations either running or under construction, and the sector remains optimistic that it’ll make a significant clean energy contribution in years to come.

[…]

SWEL claims “one single Waveline Magnet will be rated at over 100 MW in energetic environments,” and the inventor and CEO, Adam Zakheos, is quoted in a press release as saying “… we can show how a commercial sized device using our technology will achieve a Levelized Cost of Energy (LCoE) less than 1c€(US$0.01)/kWhr, crushing today’s wave energy industry reference value of 85c€ (US$0.84)/kWh …”

[…]

[T]hese kinds of promises are where these yellow sea monsters start smelling a tad fishy to us. Despite many years of wave tank testing, SWEL says it’s still putting the results together, with “performance & scale-up projections, numerical and techno-financial modeling, feasibility studies and technology performance level” information yet to be released.

[…]

If SWEL delivers on its promises, well, you’re looking at nothing short of a clean energy revolution – one it’s increasingly obvious that the planet desperately needs, even if it comes in the form of yet more plastic floating in the ocean. But with investors lining up to throw money at green energy moonshots, the space has no shortage of bad-faith operators, wishful thinking and inflated expectations. And if SWEL’s many tests had generated the kinds of results that extrapolate to some of the world’s cheapest and cleanest energy, well, we’d expect to see a little more progress, some Gates-level investment flowing in, and more than an apparent sub-10 head count driving this thing along.

Source: Wave-riding generators promise the cheapest clean energy ever | New Atlas

Population ethics

Will MacAskill is an Oxford philosopher. He’s an influential member of the Effective Altruism movement and has a view of the world he calls ‘longtermism’. I don’t know him, and I haven’t read his book, but I have done some ethics as part of my Philosophy degree.

As a parent, I find this review of his most recent book pretty shocking. I’m willing to consider most ideas but utilitarianism is the kind of thing which is super-attractive as a first-year Philosophy student but which… you grow out of?

The review goes more into depth than I can here, but human beings are not cold, calculating machines. We’re emotional people. We’re parents. And all I can say is that, well, my worldview changed a lot after I became a father.

Oxford philosophers William MacAskill and Toby Ord, both affiliated with the university’s Future of Humanity Institute, coined the word “longtermism” five years ago. Their outlook draws on utilitarian thinking about morality. According to utilitarianism—a moral theory developed by Jeremy Bentham and John Stuart Mill in the nineteenth century—we are morally required to maximize expected aggregate well-being, adding points for every moment of happiness, subtracting points for suffering, and discounting for probability. When you do this, you find that tiny chances of extinction swamp the moral mathematics. If you could save a million lives today or shave 0.0001 percent off the probability of premature human extinction—a one in a million chance of saving at least 8 trillion lives—you should do the latter, allowing a million people to die.

Now, as many have noted since its origin, utilitarianism is a radically counterintuitive moral view. It tells us that we cannot give more weight to our own interests or the interests of those we love than the interests of perfect strangers. We must sacrifice everything for the greater good. Worse, it tells us that we should do so by any effective means: if we can shave 0.0001 percent off the probability of human extinction by killing a million people, we should—so long as there are no other adverse effects.
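To see why “tiny chances of extinction swamp the moral mathematics”, it helps to run the review’s own numbers. This is just a sketch of the expected-value calculation being criticised, not an endorsement of it:

```python
# Expected-value comparison from the quoted passage: save a million
# lives for certain, or shave 0.0001% off the probability of premature
# human extinction, where extinction forecloses at least 8 trillion
# future lives.

lives_saved_now = 1_000_000

risk_reduction = 0.0001 / 100              # 0.0001% as a probability: one in a million
future_lives_at_stake = 8_000_000_000_000  # 8 trillion

expected_future_lives = risk_reduction * future_lives_at_stake
print(f"{expected_future_lives:,.0f}")     # prints 8,000,000: eight times the certain option
```

On this arithmetic the utilitarian is required to let the million people die, which is exactly the counterintuitive conclusion the review goes on to press.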

[…]

MacAskill spends a lot of time and effort asking how to benefit future people. What I’ll come back to is the moral question whether they matter in the way he thinks they do, and why. As it turns out, MacAskill’s moral revolution rests on contentious, counterintuitive claims in “population ethics.”

[…]

[W]hat is most alarming in his approach is how little he is alarmed. As of 2022, the ‘Bulletin of the Atomic Scientists’ set the Doomsday Clock, which measures our proximity to doom, at 100 seconds to midnight, the closest it’s ever been. According to a study commissioned by MacAskill, however, even in the worst-case scenario—a nuclear war that kills 99 percent of us—society would likely survive. The future trillions would be safe. The same goes for climate change. MacAskill is upbeat about our chances of surviving seven degrees of warming or worse: “even with fifteen degrees of warming,” he contends, “the heat would not pass lethal limits for crops in most regions.”

This is shocking in two ways. First, because it conflicts with credible claims one reads elsewhere. The last time the temperature was six degrees higher than preindustrial levels was 251 million years ago, in the Permian-Triassic Extinction, the most devastating of the five great extinctions. Deserts reached almost to the Arctic and more than 90 percent of species were wiped out. According to environmental journalist Mark Lynas, who synthesized current research in ‘Our Final Warning: Six Degrees of Climate Emergency’ (2020), at six degrees of warming the oceans will become anoxic, killing most marine life, and they’ll begin to release methane hydrate, which is flammable at concentrations of five percent, creating a risk of roving firestorms. It’s not clear how we could survive this hell, let alone fifteen degrees.

Source: The New Moral Mathematics | Boston Review

Conversational affordances

I’m one of those people who has to try hard not to over-analyse everything. Therapy has helped a bit, but I still can’t help reflecting on conversations I’ve had with people outside my family.

Why did that conversation go so well? Why was another one boring? Did I talk too much?

That sort of thing.

Which is why I found this article about ‘conversational doorknobs’ and improvisational comedy fascinating.

For me, learning take-and-take suggested a solution not just to songs about Spiderman, but to a scientific mystery. I was in graduate school at the time, running studies aimed at answering the question, “Do conversations end when people want them to?” I watched a stupefying number of conversations unfold, some of them blooming into beautiful repartee (one pair of participants exchanged numbers afterward), others collapsing into awkward silences. Why did some conversations unfurl and others wilt? One answer, I realized, may be the clash of take-and-take vs. give-and-take.

Givers think that conversations unfold as a series of invitations; takers think conversations unfold as a series of declarations. When giver meets giver or taker meets taker, all is well. When giver meets taker, however, giver gives, taker takes, and giver gets resentful (“Why won’t he ask me a single question?”) while taker has a lovely time (“She must really think I’m interesting!”) or gets annoyed (“My job is so boring, why does she keep asking me about it?”).

It’s easy to assume that givers are virtuous and takers are villainous, but that’s giver propaganda. Conversations, like improv scenes, start to sink if they sit still. Takers can paddle for both sides, relieving their partners of the duty to generate the next thing. It’s easy to remember how lonely it feels when a taker refuses to cede the spotlight to you, but easy to forget how lovely it feels when you don’t want the spotlight and a taker lets you recline on the mezzanine while they fill the stage. When you’re tired or shy or anxious or bored, there’s nothing better than hopping on the back of a conversational motorcycle, wrapping your arms around your partner’s waist, and holding on for dear life while they rocket you to somewhere new.

There are people I interact with on a semi-regular basis for whom I, like other people, am just a convenient person to talk at. It can be entertaining for a while, but can get a bit too much. Likewise, there are others where I feel like I have to do most of the talking, and that’s just tiring.

The best thing is when the two of you have a shared interest and you’re willing to take turns in asking questions and opening doorways. I’m not saying that I’m a particularly skilled conversationalist, but having attended a lot of events during my career, I’m better at it now than I used to be.

When done well, both giving and taking create what psychologists call affordances: features of the environment that allow you to do something. Physical affordances are things like stairs and handles and benches. Conversational affordances are things like digressions and confessions and bold claims that beg for a rejoinder. Talking to another person is like rock climbing, except you are my rock wall and I am yours. If you reach up, I can grab onto your hand, and we can both hoist ourselves skyward. Maybe that’s why a really good conversation feels a little bit like floating.

What matters most, then, is not how much we give or take, but whether we offer and accept affordances. Takers can present big, graspable doorknobs (“I get kinda creeped out when couples treat their dogs like babies”) or not (“Let me tell you about the plot of the movie ‘Must Love Dogs’…”). Good taking makes the other side want to take too (“I know! My friends asked me to be the godparent to their Schnauzer, it’s so crazy” “What?? Was there a ceremony?”). Similarly, some questions have doorknobs (“Why do you think you and your brother turned out so different?”) and some don’t (“How many of your grandparents are still living?”). But even affordance-less giving can be met with affordance-ful taking (“I have one grandma still alive, and I think a lot about all this knowledge she has––how to raise a family, how to cope with tragedy, how to make chocolate zucchini bread––and how I feel anxious about learning from her while I still can”).

[…]

A few unfortunate psychological biases hold us back from creating these conversational doorknobs and from grabbing them when we see them. We think people want to hear about exciting stuff we did without them (“I went to Budapest!”) when they actually are happier talking about mundane stuff we did together (“Remember when we got stuck in traffic driving to DC?”). We overestimate the awkwardness of deep talk and so we stick to the boring, affordance-less shallows. Conversational affordances often require saying something at least a little bit intimate about yourself, so even the faintest fear of rejection on either side can prevent conversations from taking off. That’s why when psychologists want to jump-start friendship in the lab, they have participants answer a series of questions that require steadily escalating amounts of self-disclosure (you may have seen this as “The 36 Questions that Lead to Love”).

Source: Good conversations have lots of doorknobs | Experimental History

Lessin's five steps and the coming AI apocalypse

I’m not really on any of the big centralised social networks any more, but I’m interested in the effect they have on society. Apparently there have been calls recently complaining about, and resisting, changes that Instagram has made.

In this post, Ben Thompson cites Sam Lessin, a former Facebook exec, who suggests we’re at step four of a five-step process.

  1. The Pre-Internet ‘People Magazine’ Era
  2. Content from ‘your friends’ kills People Magazine
  3. Kardashians/Professional ‘friends’ kill real friends
  4. Algorithmic everyone kills Kardashians
  5. Next is pure-AI content which beats ‘algorithmic everyone’

There's a bit in this post which I think is a pretty deep insight about human behaviour, identity, and the story we like to tell ourselves. Again, it's Thompson quoting Lessin:

I saw someone recently complaining that Facebook was recommending to them…a very crass but probably pretty hilarious video. Their indignant response [was that] “the ranking must be broken.” Here is the thing: the ranking probably isn’t broken. He probably would love that video, but the fact that in order to engage with it he would have to go proactively click makes him feel bad. He doesn’t want to see himself as the type of person that clicks on things like that, even if he would enjoy it.

So TikTok and other platforms reducing the need for human interaction to deliver 'engaging' content have the capacity to fundamentally change the way we think about the world.

In another, related, post Charles Arthur scaremongers about how AI-created content will overwhelm us:

I suspect in the future there will be a premium on good, human-generated content and response, but that huge and growing amounts of the content that people watch and look at and read on content networks (“social networks” will become outdated) will be generated automatically, and the humans will be more and more happy about it.

In its way, it sounds like the society in Fahrenheit 451 (that’s 233ºC for Europeans) though without the book burning. There’s no need: why read a book when there’s something fascinating you can watch instead?

Quite what effect this has on social warming is unclear. Possibly it accelerates polarisation, but rather like the Facebook Blenderbot, people are just segmented into their own world, and not shown things that will disturb them. Or, perhaps, they’re shown just enough to annoy them and engage them again if their attention seems to be flagging. After all, if you can generate unlimited content, you can do what you want. And as we know, what the companies who do this want is your attention, all the time.

As ever, I don’t think we’re ready for this. Not even close.

Sources: Instagram, TikTok, and the Three Trends | Stratechery by Ben Thompson and The approaching tsunami of addictive AI-created content will overwhelm us | Social Warming by Charles Arthur

Dealing with mental pain

This article is from a series that Arthur C. Brooks has in The Atlantic entitled ‘How to Build a Life’. He includes four bits of advice but I’m sharing this mainly so I can share my own approach to dealing with general background anxiety and existential angst.

First, I found several years ago that taking L-Theanine tablets every day is a gamechanger. I recommend them to anyone who will listen. And then, recently, I’ve found that running almost every day makes a huge difference. I literally can’t be anxious while running.

Man sitting with cast on leg

Wouldn’t it be nice to have a handy tool to blunt everyday mental pain a bit? Not to become numb to life—just to take the edge off, especially when it is interfering with normal life, the way you can swallow a Tylenol when your back hurts. It turns out that there are safe and healthy methods to do exactly this, including taking the same sort of painkiller for what ails your body and your mind. And that’s only the beginning.

Source: A Shortcut for Feeling Just a Little Happier - The Atlantic

The UK is in crisis

I’m writing this outside a coffee shop in Tynemouth. The place is absolutely heaving on a sunny summer’s day, but it’s takeaway only as they can’t get enough staff. Elsewhere, everyone from postal workers to bin men to lawyers is on strike.

Map of UK with woman in foreground in red dress with arms folded

An editorial in Le Monde comments on the “worst crisis since the 1970s” in the UK:

The pre-eminence of ideology over pragmatism – a supposedly British virtue – has already led to the Brexit disaster, and risks prolonging and even worsening the deteriorating situation left by Mr. Johnson, whose lies have widened the divorce between public opinion and politics. An economic crisis and instability could feed the temptation to resort to anti-European and nationalist rhetoric. At a time when threats are mounting across Europe, highlighting the need for strengthened solidarity, the crisis in the United Kingdom is a warning to all its neighbors.

Charlie Stross goes further:

Politics is dominated by an incumbent party who have ruled, except for a 13 year period (during which they were replaced by the Tory-Lite regime of Tony Blair), since 1979—43 years of conservative policies. They're completely out of new ideas, but the next leader of the nation is intent on recycling the same tired nostrums indefinitely, using an astroturfed culture war on wokery as cover rather than trying to address the deep structural problems of a state that has been hollowed out and looted for half a lifetime, so that there is no resilience left in our institutions.

This is the sort of crisis that brings down nations.

Sources: The UK's downturn is a warning for Europe | Le Monde, and The gathering crisis | Charlie's Diary

Image: DALL-E 2

Development without critique

Hypothes.is is an annotation service. I can’t remember who recommended I follow Chris Aldrich’s annotations, but his gleanings are worth following via RSS.

For example, I never would otherwise have come across this, from a Discord chat room, which got me thinking about roles within networks and communities.

I think this is the interplay where things get lost. There are very few theorizers, and tonnes of enactors. And everyone ends up thinking the enactors are theorizers, but they're not. They're developing specific methods without building up — and especially without critiquing — the underlying theory.

Source: Chris Aldrich | Hypothesis

Working from home

I don’t know anything about the author of this post other than what he’s put on his about page. He doesn’t look very old, and he’s a developer for Just Eat, the food takeaway app. Neither his about page nor this post mentions family, which is a massive red flag for me when people are talking about the downsides of working from home.

You see, while he may have problems concentrating, and miss the social element of the office, that’s not true for everyone. It’s particularly not true for those with a family. So I’m posting this as a reminder to myself and others, that context matters.

Much like the effect of the plague in medieval times, one of the effects of the pandemic has been to perturb the power balance between employers and employees. As an employee, I was initially excited by the benefits of working from home, but slowly realised that complete remote working was an alienating experience that has diminished the boundaries between work and leisure.

I want to make a developer-centric argument that the current state of majority remote working is bad, not because it is bad for your company or for your salary but because it is not best for yours and others mental well being.

Source: What Tech Workers Don’t Understand They’ve Lost by WFH | Michael Gomes Vieira

Eddie Jones on how privately educated rugby players 'lack resolve'

It’s no secret that I believe that private schools shouldn’t exist. I’ve explained why so many times over the years that I almost don’t know where to link, but try this from 2012, or this from 2019. Ben Werdmuller also shared his thoughts a week ago.

I’m pleased to see this report of comments made by Eddie Jones, England Rugby Union’s head coach on private schools. I think we’re coming to realise, as a society, that more diversity really is better for all of us.

Jones, 62, claimed the pathway produced players who had enjoyed a "closeted life" and lacked "resolve" in a weekend interview with the i newspaper.

[…]

Jones had claimed in his interview that “you are going to have to blow the whole thing up” as the system yielded young players who struggled to lead because “everything’s done for you”.

“When we are on the front foot we are the best in the world,” Jones added. “When we are not on the front foot our ability to find a way to win, our resolve, is not as it should be."

Source: Eddie Jones: England head coach admonished by RFU over private school system criticism | BBC Sport

Mathematical models of evolution

I have no idea if this has since been debunked, but it’s fascinating to me.

Biologist and mathematician D’Arcy Thompson advanced a strange new idea in his 1917 book On Growth and Form: He found that if you draw the outline of an animal or plant on an ordinary Cartesian grid, and then you put the grid through some mathematical transformation (stretching it, for example, so that its squares become rhombuses), very often the resulting shape is that of a related real creature.

Source: A Stretch - Futility Closet
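Thompson's grid transformations amount to simple coordinate maps: deform the grid, and every point of the outline drawn on it moves with it. A minimal sketch of the idea (an illustrative shear, not one of Thompson's actual figures):

```python
# Sketch: a D'Arcy Thompson-style grid transformation.
# A shear maps the squares of a Cartesian grid to rhombuses; applying the
# same map to an outline's points deforms the whole shape coherently.

def shear(points, k):
    """Apply the shear (x, y) -> (x + k*y, y) to a list of (x, y) points."""
    return [(x + k * y, y) for (x, y) in points]

# A unit square outline becomes a rhombus (parallelogram) under the shear.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
rhombus = shear(square, 0.5)
print(rhombus)  # [(0.0, 0), (1.0, 0), (1.5, 1), (0.5, 1)]
```

The same few lines would deform a fish outline into a differently proportioned one, which is the heart of Thompson's observation.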

Ethical (open) source (licenses)

As I’ve said recently elsewhere, I don’t think technical projects do a good enough job of proactively and defensively licensing their outputs. This, I’d say, is why we can’t have nice things.

While I agree with the sentiment around ‘ethical source’ models, the philosopher in me would argue that it’s an absolute minefield.

Ethical impulses aren’t new to software. The Free Software Foundation advocates for a “struggle against for-profit corporate control” and against restrictions on users’ freedom to inspect and modify code in the products they buy. It was started after its founder, Richard Stallman, found he was unable to repair his broken printer because he was unable to edit its proprietary code. However, the open-source movement distanced itself from this political stance, instead making the case that open source was good for corporations on “pragmatic, business-case grounds.” But both free and open-source software allow anyone to use code for any purpose.

[…]

So what about developers who don’t want their work to be used to help separate kids from their families or create nonconsensual pornography?

The Ethical Source Movement seeks to use software licenses and other tools to give developers “the freedom and agency to ensure that our work is being used for social good and in service of human rights.” This view emphasizes the rights of developers to have a say in what the fruits of their labor are used for over the rights of any user to use the software for anything. There are a myriad of different licenses: some prohibit software from being used by companies that overwork developers in violation of labor laws, while others prohibit uses that violate human rights or help extract fossil fuels. Is this the thicket Stallman envisions?

[…]

Will people who intend to commit evil acts with software care what a license says or abide by its terms? Well, it depends. While the anonymous users of the deepfake software I studied might still have used it to create nonconsensual porn, even if the license terms prohibited this, Ehmke suggests that corporate misuse is perhaps a more pressing concern: she points to campaigns to prevent software from being used by Palantir and a 2019 report by Amnesty International that raised concerns that the business models of big name technology companies may threaten human rights. Anonymous users on the internet might not care about licenses, but as Ehmke says and my own experience with lawyers in tech companies confirms, “These companies and their lawyers care very much about what a license says.” So while ethical source licenses might not stop all harmful uses, they might stop some.

Source: Can you stop your open-source project from being used for evil? | Stack Overflow Blog

Being busy isn't a badge of honour

If you think I’m sharing this image because my name is Doug and I find the accompanying image amusing then you’d be 100% correct.

I used to think being swamped was a good sign. I’m doing stuff! I’m making progress! I’m important! I have an excuse to make others wait! Then I realized being swamped just means I’m stuck in the default state, like a ball that settled to a stop in the deepest part of an empty pool, the spot where rainwater has collected into a puddle.

Being swamped means probably not getting enough rest, making things more complicated than they need to be, wasting time on petty decisions, and not thinking deeply about important decisions.

Now, I’m impressed by people who are not swamped. They prioritize ruthlessly to separate what’s most important from everything else, think deeply about those most-important things, execute them well to make a big impact, do that consistently, and get others around them to do the same. Damn, that’s impressive!

Source: Being Swamped is Normal and Not Impressive | Greg Kogan

Meta may really be exiting Europe as soon as this year

Well, we can but hope. The backlash from Instagram-obsessed people would be too much for politicians to bear, however…

Meta has—as it must—warned its investors that it’s in deep trouble in Europe. It’s neither a threat nor a bluff, but rather a statement of fact: without a successor to the U.S.-EU Privacy Shield deal, which the EU’s top court nuked a couple of years back, Facebook and Instagram will be forced to pack up and abandon the European market.

Indeed, this uncomfortable reality was made clearer last month, when Ireland’s privacy regulator submitted a draft decision to its EU peers that would ban Facebook and Insta from transferring Europeans’ personal data to the U.S., because there is no longer any legal basis for these transfers to continue.

[…]

I find it astonishing that even Facebook’s critics, let alone the markets, haven’t glommed onto the reality of the situation. I suspect the culprit is a deep-seated notion that Mark Zuckerberg’s all-powerful company can somehow fix this by modifying its legendarily bad privacy behavior—as though it had some brilliant solution hidden up its sleeve, just waiting until the last possible second before pulling it out.

Source: Even Meta’s critics don’t grasp how likely it is that Facebook and Instagram will soon have to exit Europe | Fortune

Image: created using Midjourney

The importance of being yourself

Any article that quotes the Stoic philosopher Epictetus and talks about the importance of being yourself is a winner.

When we are ourselves, we have value. When we are like everyone else…we are fungible. We are replaceable–by definition. We have little value…by definition.

[…]

BE YOU. Be the only one of you in the whole world. Be the red. That’s where the fun is (without having to fake it). That’s where the money is (you can name your price). That’s where the value is (you can’t be replaced).

[…]

Two thousand years before Peter Thiel said that, “competition is for losers,” Epictetus quipped that, “You can always win if you only enter competitions where winning is up to you.”

[…]

Too many people pointlessly enter contests where the outcome is dependent on forces outside their control. They think it’s safer to be like everyone else…when in fact, what they’re really doing is hiding themselves in the chorus, protecting themselves from judgment. They’re less likely to be singled out and laughed at, sure, but they’re guaranteeing that they’ll never really be noticed or appreciated. Theirs becomes the Indian restaurant that will never be great, but it will never be closed. That is the best you can expect when you’re not playing to win…you’re playing not to lose.

Source: This Is The Best Career Decision You Can Possibly Make | Ryan Holiday

Generating a logo using an AI drawing model

A couple of weeks ago, I was experimenting with Midjourney and speculating about machine creativity. This post is interesting if you haven’t tried using an AI drawing model as it talks about what Dan Hon calls ‘prompt engineering’ (a term he doesn’t like). Dan also linked to this fantastic example from Andy Baio.

Everybody has heard about the latest cool thing™, which is DALL·E 2 (henceforth called Dall-e). A few months ago, when the first previews started, it was basically everywhere. Now, a few weeks ago, the floodgates have been opened and lots of people on the waitlist got access - that group included me.

I’ve spent a day playing around with it, learned some basics (like the fact that adding “artstation” to the end of your phrase automatically makes the output much better…), and generated a bunch of (even a few nice-looking) images. In other words, I was already a bit warmed up.

To add some more background, OctoSQL - an open source project I’m developing - is a CLI query tool that lets you query multiple databases and file formats in a single SQL query. I knew for a while already that its logo should be updated, and with Dall-e arriving, I could combine the fun with the practical.

Source: How I Used DALL·E 2 to Generate The Logo for OctoSQL | Jacob Martin

Algorithmic Anxiety

I listened to a great episode of CBC’s Spark podcast with the excellent Nora Young on what ownership will look like in 2050. One of the contributors talked about what it might look like to be “on the wrong side of the API”. In other words, being the person who responds to the request, rather than the one who makes it.

We’re already heading towards a dystopia in which people have their behaviour influenced by black-box algorithms that we don’t understand. This article talks about shopping on Instagram and listing property on Airbnb, but the point (and the anxiety) is universal.

Only in the middle of the past decade, though, did recommender systems become a pervasive part of life online. Facebook, Twitter, and Instagram all shifted away from chronological feeds—showing messages in the order in which they were posted—toward more algorithmically sequenced ones, displaying what the platforms determined would be most engaging to the user. Spotify and Netflix introduced personalized interfaces that sought to cater to each user’s tastes. (Top Picks for Kyle!) Such changes made platforms feel less predictable and less transparent. What you saw was never quite the same as what anyone else was seeing. You couldn’t count on a feed to work the same way from one month to the next. Just last week, Facebook implemented a new default Home tab on its app that prioritizes recommended content in the vein of TikTok, its main competitor.

Almost every other major Internet platform makes use of some form of algorithmic recommendation. Google Maps calculates driving routes using unspecified variables, including predicted traffic patterns and fuel efficiency, rerouting us mid-journey in ways that may be more convenient or may lead us astray. The food-delivery app Seamless front-loads menu items that it predicts you might like based on your recent ordering habits, the time of day, and what is “popular near you.” E-mail and text-message systems supply predictions for what you’re about to type. (“Got it!”) It can feel as though every app is trying to guess what you want before your brain has time to come up with its own answer, like an obnoxious party guest who finishes your sentences as you speak them. We are constantly negotiating with the pesky figure of the algorithm, unsure how we would have behaved if we’d been left to our own devices. No wonder we are made anxious. In a recent essay for Pitchfork, Jeremy D. Larson described a nagging feeling that Spotify’s algorithmic recommendations and automated playlists were draining the joy from listening to music by short-circuiting the process of organic discovery: “Even though it has all the music I’ve ever wanted, none of it feels necessarily rewarding, emotional, or personal.”

[…]

“Algorithmic anxiety,” however, is the most apt phrase I’ve found for describing the unsettling experience of navigating today’s online platforms. Shagun Jhaver, a scholar of social computing, helped define the phrase while conducting research and interviews in collaboration with Airbnb in 2018. Of fifteen hosts he spoke to, most worried about where their listings were appearing in users’ search results. They felt “uncertainty about how Airbnb algorithms work and a perceived lack of control,” Jhaver reported in a paper co-written with two Airbnb employees. One host told Jhaver, “Lots of listings that are worse than mine are in higher positions.” On top of trying to boost their rankings by repainting walls, replacing furniture, or taking more flattering photos, the hosts also developed what Jhaver called “folk theories” about how the algorithm worked. They would log on to Airbnb repeatedly throughout the day or constantly update their unit’s availability, suspecting that doing so would help get them noticed by the algorithm. Some inaccurately marked their listings as “child safe,” in the belief that it would give them a bump. (According to Jhaver, Airbnb couldn’t confirm that it had any effect.) Jhaver came to see the Airbnb hosts as workers being overseen by a computer overlord instead of human managers. In order to make a living, they had to guess what their capricious boss wanted, and the anxious guesswork may have made the system less efficient over all.

Source: The Age of Algorithmic Anxiety | The New Yorker
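The shift the article describes, from chronological feeds to algorithmically sequenced ones, can be sketched in a few lines. This is a toy scoring setup with invented weights; no platform's actual ranking looks like this:

```python
# Sketch: chronological vs engagement-ranked feed ordering.
# Timestamps and predicted-engagement scores are invented for illustration.

posts = [
    {"id": "a", "timestamp": 3, "predicted_engagement": 0.2},
    {"id": "b", "timestamp": 2, "predicted_engagement": 0.9},
    {"id": "c", "timestamp": 1, "predicted_engagement": 0.5},
]

# Chronological: newest first -- predictable, the same for everyone.
chronological = sorted(posts, key=lambda p: p["timestamp"], reverse=True)

# Algorithmic: whatever the model predicts you'll engage with most --
# opaque, personalised, and liable to change between visits.
ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

print([p["id"] for p in chronological])  # ['a', 'b', 'c']
print([p["id"] for p in ranked])         # ['b', 'c', 'a']
```

The anxiety the article describes lives in that second `key` function: users can't inspect it, so they build "folk theories" about it instead.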

Naming heatwaves

I’m hoping other countries follow suit and bring some attention to heatwaves as human-caused extreme weather events.

The world's first named heat wave hit Seville, Spain, this week, pushing temperatures past 110 degrees Fahrenheit and earning the most severe tier in the city's new heat wave ranking system.

Heat wave “Zoe” has brought scorching temperatures to the southern part of the country for the last few days, particularly the region of Andalusia where Seville is located. Even in the evenings, the Spanish meteorological service recorded temperatures that hovered in the mid-80s in some areas — an extra stress on the human body, which relies on cooler nights to recover from high daytime heat.

Source: ‘Zoe’ becomes the world’s first named heat wave | Climatewire

Doomed to live in a Sisyphean purgatory between insatiable desires and limited means

I’m reading The Dawn of Everything: A New History of Humanity by David Graeber and David Wengrow. It’s an eye-opening book in many ways, and upends notions of how we see the way that people used to live.

This article suggests that 15-hour working weeks are the norm in egalitarian cultures. While working hours are steadily declining, we’re still a long way off — primarily because our desires and means are out of kilter.

Charts from various countries showing working hours declining since 1870 for non-agricultural workers

New genomic and archeological data now suggest that Homo sapiens first emerged in Africa about 300,000 years ago. But it is a challenge to infer how they lived from this data alone. To reanimate the fragmented bones and broken stones that are the only evidence of how our ancestors lived, beginning in the 1960s anthropologists began to work with remnant populations of ancient foraging peoples: the closest living analogues to how our ancestors lived during the first 290,000 years of Homo sapiens’ history.

The most famous of these studies dealt with the Ju/’hoansi, a society descended from a continuous line of hunter-gatherers who have been living largely isolated in southern Africa since the dawn of our species. And it turned established ideas of social evolution on their head by showing that our hunter-gatherer ancestors almost certainly did not endure “nasty, brutish and short” lives. The Ju/’hoansi were revealed to be well fed, content and longer-lived than people in many agricultural societies, and by rarely having to work more than 15 hours per week had plenty of time and energy to devote to leisure.

Subsequent research produced a picture of how differently Ju/’hoansi and other small-scale forager societies organised themselves economically. It revealed, for instance, the extent to which their economy sustained societies that were at once highly individualistic and fiercely egalitarian and in which the principal redistributive mechanism was “demand sharing” — a system that gave everyone the absolute right to effectively tax anyone else of any surpluses they had. It also showed how in these societies individual attempts to either accumulate or monopolise resources or power were met with derision and ridicule.

Most importantly, though, it raised startling questions about how we organise our own economies, not least because it showed that, contrary to the assumptions about human nature that underwrite our economic institutions, foragers were neither perennially preoccupied with scarcity nor engaged in a perpetual competition for resources.

For while the problem of scarcity assumes that we are doomed to live in a Sisyphean purgatory, always working to bridge the gap between our insatiable desires and our limited means, foragers worked so little because they had few wants, which they could almost always easily satisfy. Rather than being preoccupied with scarcity, they had faith in the providence of their desert environment and in their ability to exploit this.

Source: The 300,000-year case for the 15-hour week | Financial Times

Finish what you start

This article uses the analogy of a burger chef to show how software teams can be more productive by focusing on a small number of features at a time.

I think this is more widely applicable. The factory production line was designed to make already-designed things with the fewest mistakes. It does not make people happy, nor does it foster creativity.

Figuring out problems is hard. It’s kind of what I do for a living. Having lots of different things on the go at the same time does not improve things, it makes each one worse.

Now that we understand the burger and the software variations of the problem, we can make a recommendation to both cooks and software engineers alike:

Reducing transaction costs enables small batches. Small batches, in turn, reduce average cycle times, diminish risk, and enhance reliability.

You should only start without having finished when transaction costs are high, and it wouldn’t make economic sense to spend time decreasing them, either because you have agreed to a particular delivery date or because you don’t have the capital to invest.

That said, I’d be careful to avoid falling into a situation where “you’re too busy draining the flood to be able to fix the leak”. The earlier you decrease transaction costs, the earlier you’ll be reaping the benefits from having done it.

Source: How finishing what you start makes teams more productive and predictable | Lucas F. Costa
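The trade-off Costa describes can be sketched with toy arithmetic (illustrative numbers, not from the article): a high per-batch transaction cost pushes you towards big batches, while lowering that cost makes small batches, and therefore short cycle times, economical.

```python
# Sketch: cycle time as a function of batch size.
# Each batch pays a fixed transaction cost (e.g. a release), plus
# per-item work; every item in a batch ships only when the batch does.

def cycle_time(batch_size, transaction_cost, per_item_time):
    """Time from starting a batch to delivering everything in it."""
    return transaction_cost + batch_size * per_item_time

# With a high transaction cost, a big batch amortises it over many items...
print(cycle_time(10, transaction_cost=8, per_item_time=1))  # 18

# ...but after investing in lowering the transaction cost, small batches
# deliver much sooner, reducing risk and improving predictability.
print(cycle_time(2, transaction_cost=1, per_item_time=1))   # 3
```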

You don't need a personal trainer

On Saturday, my Garmin smartwatch told me that my ‘fitness age’ is now 33.5. This is eight years younger than my chronological age, and apparently as low as I can get it using the Garmin app.

This is not a surprise to me. Covid absolutely battered my lungs from January to March. So I decided to do something about it, and built up to running every single day.

Willpower is necessary to form habits, but then willpower is necessary in life in general. So yes, get a personal trainer as this guy has done. But someone shouting at you to try harder is an extrinsic motivator. What you need to do is to develop intrinsic motivation to go harder and be better.

We all know the benefits of regular exercise, from living longer to better mental clarity. However, it is notoriously difficult advice to digest, especially for someone in their early 20s who hasn't even experienced a real hangover. The gist of the advice being that money and career success will come if you work at it. But prioritise your mental and physical health and your day-to-day work will improve. It's much easier to stay in shape than it is to stagnate and rebuild your fitness. Your 40 year-old self will thank you.

For a long time I’ve known this to be true. During periods of consistent exercise I’ve had more energy and mental clarity throughout the day. My personal outlook on life is generally better as well. Not to mention that outdoor activities with friends are more accessible and less daunting. Despite this, it has still always been a struggle to stay consistent.

A wave of “habit fetishism” has swept through the West in recent years with books like Atomic Habits regularly topping the best seller lists. It’s a tantalising concept as it sells an easy way to “live the life you’ve always wanted”.

It may work for some, but very few people who try these techniques actually “live the life of their dreams”. What keeps fit people going to the gym on a regular basis isn’t wearing their running shoes to bed at night. It’s discipline and accountability.

This brings me to the best investment I’ve ever made: A personal trainer.

Source: The best investment | ᕕ( ᐛ )ᕗ Herman’s blog

Foregrounding externalities

I found this article via the excellent Sentiers, which I support as a member. It discusses the importance of making externalities visible — a term which is reasonably common in literature relating to economics and risk, but not in general discourse.

An externality is “an indirect cost or benefit to an uninvolved third party that arises as an effect of another party’s (or parties') activity”. In this case we’re talking about the cost of extracting materials from the ground and shipping them around the world.

The shipping container led to the highly sophisticated supply chains we see today, which has been extremely efficient in making, exploiting and creating a form of global labour and material arbitrage.

It is vital to the relocating and offshoring of production to places where the costs are far lower. But even more importantly it makes the consequences, or ‘externalities’, of production completely invisible to Western consumers.

What if we suddenly decided that we’re going to stop pretending those things don’t happen? What if we embrace the consequences of what it means to manufacture products and to build, and to price its full cost? If the sticker price included the full cost of everything we build, then suddenly making things locally and sourcing materials locally would become much more attractive.

Source: Designing without depletion: Joseph Grima’s non-extractive architecture | Foreground

Slack emboldens the meek

This is a useful article which focuses on the lack of internal Codes of Conduct and community managers within organisations. Performativity in the workplace is a thing, and workplace chat tools can escalate those types of behaviours into new levels of toxicity.

People act differently online, and tools like Slack, while not expressly built to hook users, still make work feel like social media. Emoji reactions and replies provide the same validation as likes and retweets. “I don’t post online anymore because I don’t like being so public, but if I have something fun going on in my life, I will put that into Slack,” said Rebecca Levin, a Program Manager at research startup Maze. And as Ellen Cushing noted in the Atlantic, like Twitter and Reddit, discussions in Slack feel “categorically different, somehow less real.”

Online, everyone is engaged in a digitally-mediated performance. As Erving Goffman wrote, “We are all just actors trying to control and manage our public image.” And the pressure to maintain that image can quickly turn reasonable people into pundits. When news breaks, “there’s this feeling that if I don’t post about it on Twitter, I’m complicit,” said Charlie Warzel, co-author of Out of Office: The Big Problem and Bigger Promise of Working from Home. “You end up weighing in as if you’re some sort of public figure, despite the fact that you’re not.”

Slack emboldens the meek; compared to an all-hands, the ease of posting makes speaking up a lot easier. Anne Helen Petersen, Warzel’s partner and co-author, has found herself in that position, and sees it as a mixed blessing. The freedom is powerful, she said, “but it also opens a portal. It’s just more discourse, right?”

[...]

Leaders often treat Slack as just another tool. But as Godwin’s Law wryly observed, any extended online discussion is a Hitler comparison waiting to happen. “You’re creating a public room where people are empowered to talk back,” said Marketos. “If something starts to blow up in Slack, you need to have an amazing response that’s defensible if it’s screenshotted and shared with a reporter.” While few HR teams are experienced in rapid-response crisis communications, for community managers, “it comes naturally, and it’s very much an unsung part of their skillset.”

Source: The Extremely Online Workplace | by Benjamin Jackson

Discourses of Climate Delay

I came across this and am sharing it to remind myself of all of the ways that people try to avoid the very pressing problem of the climate emergency.

Source: Discourses of Climate Delay | Leolinne

On GitHub Achievements

GitHub is owned by Microsoft, and Microsoft was one of the earlier adopters of the Open Badges standard. So when I saw this announcement about GitHub Achievements, I naturally assumed they’d be Open Badges.

However, I don’t think that’s the case. I think it’s another staging post in the vendor lock-in long game.

Achievements celebrate and showcase your journey on GitHub. You can take a trip down memory lane as you reminisce on some of your earlier work (yes, certain achievements will surface events dating back to the beginning!). You can also share them on social media to show off the new badges you’ve earned. We’ll only ship a few to start, but as we roll out more over time, achievements will begin to paint a clearer picture of you and the work you’re passionate about.

Source: Introducing Achievements: recognizing the many stages of a developer’s coding journey | The GitHub Blog
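For contrast with a platform-locked achievement, an Open Badges assertion is just portable, hosted metadata that any consumer can fetch and verify. A minimal sketch, with invented values and a structure loosely following the Open Badges 2.0 Assertion class:

```python
# Sketch: the kind of portable metadata an Open Badges assertion carries.
# All URLs and values are hypothetical; the field names loosely follow
# the Open Badges 2.0 specification's Assertion class.

assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://example.org/assertions/123",        # hypothetical URL
    "recipient": {
        "type": "email",
        "hashed": False,
        "identity": "dev@example.org",
    },
    "badge": "https://example.org/badges/contributor",  # hypothetical badge class
    "issuedOn": "2022-07-01T00:00:00Z",
    "verification": {"type": "hosted"},
}

print(sorted(assertion.keys()))
```

Because the assertion lives at a URL the issuer hosts, rather than inside one vendor's database, the earner can display and verify it anywhere — which is precisely what a proprietary achievements system forecloses.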

Teaching about dead white guys in an age of social media

I’m pleased that I completed my formal education and moved out of teaching before social media transformed the world. In this article, Marie Snyder talks about teaching an introductory Philosophy course (the subject of my first degree) and the pushback she’s had from students.

There’s a lot I could write about this which would be uninteresting, so just go and read her article. All I’ll say is that, personally, I still listen to musicians (like Morrissey) whose political views I find abhorrent. Part of diversity is diversifying your own thinking.

It’s important that we scrutinize behaviours. It’s useful to clarify that discrimination or harm of any kind — from former cultural appropriation to sexual crimes — is not to be tolerated. We should definitely overtly chastise damaging behaviours of people as a means to shift society to evolve down the best timeline. But we are all greater than our worst actions; for instance, Heidegger’s overt anti-semitism doesn’t obliterate his theories of being. His student and lover, Hannah Arendt, is another name potentially requested stricken from syllabi for a collection of racist comments despite her quarrel with her mentor about his bigoted position.

We have to look at ideas, not people, when sifting the wheat from the chaff. Some ideas stand the test of time even if their author is found otherwise wanting. It doesn’t suggest that they’re an honourable person when we find a piece of work worthy of our attention, and it’s not like we’re contributing to their wealth if they’re long dead. We need to bring back a nuanced approach to these works instead of the current dichotomous path of slotting people in a good or bad box.

Source: On Tossing the Canon in a Cannon | 3 Quarks Daily

Social-first searching

I don’t see this as such a weird thing, especially when it comes to food. For example, my wife follows lots of local places on Instagram and will research new places using that app when we travel. I tend to use Google Maps for that kind of thing. Neither of us would start with a regular web search, because context is important.

Even back prior to 2010, I can remember Drew Buddie doing a TeachMeet presentation on ‘Twitter is my Google’. The point is that humans are social creatures. We want recommendations and to see what we could be potentially missing out on…

Nearly 40% of Gen Z prefers searching on TikTok and Instagram over Google Search and Maps, according to Google's internal data first reported by TechCrunch.

Google confirmed this statistic to Insider, saying, “we face robust competition from an array of sources, including general and specialized search engines, as well as dedicated apps."

Source: Nearly Half of Gen Z Prefers TikTok and Instagram Over Google Search | Business Insider

Chromebooks banned in Danish schools

Slowly, and then all at once is how a ‘splinternet’ happens. I’m seeing more and more cases of the EU standing up to so-called Big Tech companies like Google over data processing agreements.

In this case, it’s Denmark’s data protection agency, but I should imagine other European countries might follow suit. There’ll be an uproar, though, because data security and sovereignty aside, Google absolutely nailed it with that operating system.

Denmark is effectively banning Google’s services in schools, after officials in the municipality of Helsingør were last year ordered to carry out a risk assessment around the processing of personal data by Google.

In a verdict published last week, Denmark’s data protection agency, Datatilsynet, revealed that data processing involving students using Google’s cloud-based Workspace software suite — which includes Gmail, Google Docs, Calendar and Google Drive — “does not meet the requirements” of the European Union’s GDPR data privacy regulations.

Specifically, the authority found that the data processor agreement — or Google’s terms and conditions — seemingly allow for data to be transferred to other countries for the purpose of providing support, even though the data is ordinarily stored in one of Google’s EU data centers.

Source: Denmark bans Chromebooks and Google Workspace in schools over data transfer risks | TechCrunch

Productivity is the enemy of creativity

I like the metaphor used in this post of being like a lightbulb: fully on, or off. In fact, not only have I organised my working life to be like this (I can’t work at half pace, and it’s burned me out when I’ve been employed), but it’s also the advice I give to my kids when they play sports.

Tweet with calendar showing no meetings and only a workout scheduled at 3pm. Text to go with tweet reads 'this is the one true flex'

“Most people in life are dim lights, they're on but they are not bright. Because they are trying to conserve energy. You should make a choice, you are either on or off. There is either GO time or there is relaxing time. Try to be more binary. You have more energy when it’s Go time" - Andrew “Cobra” Tate.

[…]

Naval [Ravikant] famously said: “Productivity is the Enemy of Creativity”

Rest is absolutely critical for high performance. Without it, it’s like revving your engine until it breaks or blows up. We’re in a new world now where our brains power everything. As the Doomberg crew calls it: “The Gig Economy for Brains.”

Naval again says: “Some of the most creative and productive people I have ever met work in multi-week bursts and then have weeks where they just idle with little done. It’s the nature of the human animal.”

All-in and fully energized OR quiet and at rest. There is no in between. This is a key habit for effective work in the modern day. Don’t be a dim light.

Source: Be Like a Light Bulb: The Importance of Resting Ethic | The Hard Fork by Marvin Liao

Spring '83

John Johnston put me onto this via a comment on my personal blog. Spring ‘83 is a protocol developed by Robin Sloan, multi-talented developer, olive farm owner, and author of novels such as Mr. Penumbra’s 24-Hour Bookstore.

The internet these days is much less fun and weird than it used to be, which is sad. Here’s an example of the protocol in action at the site spring83.mozz.us and you can have a play about at The Oakland Follower Sentinel by creating your own keypair (see the blue sidebar!)

For me, the recent resurgence of the email newsletter feels not much like a renaissance, and more like a massing of exhausted refugees in the last reliable shelter. I’m glad we have it; but email cannot be the end of the story, either.

I’m dissembling a bit. The truth is, I reject Twitter, RSS, and email also because … I am hyped up to invent things!

So it came to pass that I found myself dreaming about designs that might satisfy my requirements, while also supporting new interactions, new ways of relating, new aesthetics. At the same time, I read a ton about the early days of the internet. I devoured oral histories; I dug into old protocols.

The crucial spark was RFC 865, published by the great Jon Postel in May 1983. The modern TCP/IP internet had only just come online that January, can you believe it? RFC 865 describes a Quote of the Day Protocol:

A server listens for TCP connections on TCP port 17. Once a connection is established a short message is sent out the connection (and any data received is thrown away). The service closes the connection after sending the quote.
That’s it. That’s the protocol.

I read RFC 865, and I thought about May 1983. How the internet might have felt at that moment. Jon Postel maintained its simple DNS by hand. There was a time when, you wanted a computer to have a name on the internet, you emailed Jon.

There’s a way of thinking about, and with, the internet of that era that isn’t mere nostalgia, but rather a conjuring of the deep opportunities and excitements of this global machine. I’ll say it again. There are so many ways people might relate to one another online, so many ways exchange and conviviality might be organized.

Spring ‘83 isn’t over yet.

Source: Specifying Spring ‘83 | Robin Sloan
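Sloan isn’t exaggerating about how small RFC 865 is. As an aside, the whole Quote of the Day Protocol can be sketched in a few lines of Python. This is just a toy illustration: the quote text is made up, and it listens on an arbitrary high port rather than the TCP port 17 the RFC specifies, since binding low ports requires elevated privileges.

```python
import socket
import threading  # used to run the demo server alongside a client

QUOTE = b"The internet is for everyone.\n"  # placeholder quote of the day

def qotd_server(host="127.0.0.1", port=1717):
    """Minimal RFC 865-style server: accept one connection, send the
    quote, discard any received data, and close. (RFC 865 specifies
    TCP port 17; this sketch uses a high port to avoid needing root.)"""
    srv = socket.create_server((host, port))
    conn, _ = srv.accept()
    conn.sendall(QUOTE)   # send the short message out the connection...
    conn.close()          # ...then close it; any data received is thrown away
    srv.close()

def qotd_client(host="127.0.0.1", port=1717):
    """Connect and read until the server closes the connection."""
    with socket.create_connection((host, port)) as s:
        chunks = []
        while data := s.recv(1024):
            chunks.append(data)
    return b"".join(chunks)
```

Run `qotd_server` in a thread (or another terminal) and `qotd_client()` returns the quote. That really is the whole protocol: no handshake, no headers, no state.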

No more low-speed fart sounds for Teslas

Here in the UK, I’ve only ever heard electric vehicles make that high-pitched robotic hum at low speeds. However, it seems there was a proposal in the US for car owners to be able to set their own noise.

That turned out not to be a great idea for those who are blind or partially sighted. It would also lead to a cacophony of noise for everyone else, to be honest…

Back in 2019, NHTSA introduced a proposed rule-making that would have allowed drivers to “select the sound they prefer from the set of sounds installed in the vehicle.” The idea was an amendment to a previous rule requiring EVs to make fake sounds at low speeds to prevent injuring pedestrians, especially people who are blind or visually impaired. But after soliciting feedback from the industry and consumer groups, the agency says it is scrapping the proposed rule.

“The great majority of the comments on the [notice of proposed rule-making], including those submitted by advocacy organizations for the blind and by people who are blind or who have low vision, did not favor the proposal to allow hybrid and electric vehicles to have an unlimited number of different pedestrian alert sounds,” a spokesperson for NHTSA said. “Most of those comments favored more uniformity, rather than less, in the number and types of alert sounds allowed.”

[...]

Currently, most EVs emit the same robotic hum when operating at low speeds. And NHTSA says that’s fine, just so long as it doesn’t add a bunch of additional sounds that owners can select. Basically, the agency says it wants to prevent a situation where you have tens of thousands of EVs on the road making all sorts of musical sounds or bird noises — or fart sounds, for that matter. (Tesla, I’m looking at you.)

Source: EV owners won’t be able to pick their own low-speed noise after NHTSA scraps proposal - The Verge

Unintended consequences of smart thermostats

It must have been about five years ago when we bought a Nest thermostat. Before that point, the temperature of our house would be a continuous low-level source of friction. Since then, not only has it ceased to be a point of contention, but it’s also saved us money.

This article points out that, while there are really positive benefits of reducing energy usage at scale, there are unintended side effects in terms of spikes at times when renewable energy isn’t available.

Set by default to turn on before dawn, the smart thermostats unintentionally work in concert with other thermostats throughout neighborhoods and regions, prompting inadvertent, widespread energy-demand spikes on the grid.

The smart thermostats are saving homeowners money, but they are also initiating peak demand throughout the network at a bad time of day, according to Cornell engineers in a forthcoming paper in Applied Energy (September 2022).

[…]

Lee and Zhang investigated “setpoint behavior” and learned that most homeowners use the smart thermostat’s factory-default settings. Evidence showed that residents remain confused about how to operate their thermostats and are often unable to program it, the authors said.

[…]

While the setpoint schedules are designed to achieve the energy-saving benefit, the peak demands are concentrated primarily when renewable energy is unavailable – aggravating the peak demand by nearly 50%, according to the paper.

[…]

Without a tenable way to store energy from renewable sources like solar power, the electric utilities will be unable to supply this peak demand, which prompts fossil-fuel generators to satisfy the power load. “This can offset the greenhouse gas emissions benefit of electrification,” Lee said.

Source: Smart thermostats inadvertently strain electric power grids | Cornell Chronicle
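The synchronisation effect the researchers describe is easy to see in a toy model. The sketch below is my own illustration with made-up numbers (not the paper’s methodology): it compares 1,000 homes whose heating all starts at the same factory-default minute against the same homes with staggered setpoints.

```python
import random

def peak_demand(start_times, kw_per_home=3.0, horizon=60):
    """Toy model: each home's heating draws kw_per_home for 30 minutes
    starting at its thermostat's setpoint time. Returns the peak grid
    draw (kW) over the horizon, sampled per minute.
    Illustrative figures only, not taken from the Cornell paper."""
    load = [0.0] * horizon
    for start in start_times:
        for minute in range(start, min(start + 30, horizon)):
            load[minute] += kw_per_home
    return max(load)

random.seed(42)
n_homes = 1000

# Everyone on the factory default: heating kicks in at minute 0 (say, 6am)
default_peak = peak_demand([0] * n_homes)

# Staggered setpoints spread the same consumption across the hour
staggered_peak = peak_demand([random.randrange(0, 60) for _ in range(n_homes)])

print(f"default: {default_peak} kW, staggered: {staggered_peak} kW")
```

The homes consume roughly the same energy either way; only the coincidence of start times changes, yet the default-setpoint peak comes out far higher in this run.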

(Machine) Creativity

It is genuinely amazing what you can create these days with an AI model by simply inputting a few words of natural language. Craiyon (formerly DALL·E mini) allows anyone to do this right now, but there are also previews of much more powerful models that will be available soon.

As Albert Wenger asks, what does this mean for creative people? I don’t think technology ever completely replaces human creativity, but rather augments it. So I think we’ll see even more artists working with AI models to create amazing things.

During my run today, I was thinking about how awesome it would be to generate running music perfectly suited to the route I was going to run. That’s entirely possible if we continue along this trajectory!

Recently we have had several breakthroughs, first starting with large language models that can tell stories, and now with DALL-E2 and midjourney, two models that can generate amazing imagery based on textual input. For example, here is an image “imagined” by midjourney based on the prompt “Sailing across the alps”

It is mind-bending to sit with this image for a while. A machine created it and did so within a space of minutes, yet it is full of imagination and detail and could easily be on the cover of a book or the walls of a museum.

So what does it mean that we now clearly and demonstrably have creative machines?

Source: The Meaning of Machine Creativity | Continuations by Albert Wenger

Personal Publishing Principles 

I really like the approach of coming up with your ‘personal publishing principles’ for your website, blog, and newsletter. This is CJ Chilvers' version, which I discovered via Rebecca Toh. Below are some of my favourites from CJ’s list.

This is the place to try out all the crazy ideas/projects/products I come up with. Only 1 in 100 of them will resonate, so I need a place that feels good to put up 99 failures — at least. You don’t need 100 products necessarily. But you probably need 100 landing pages.

[…]

Fail in public. Try things. Don’t be boring. See what sticks.

[…]

Curation still matters because “it’s not the customer’s job to care.” To paraphrase Dave Pell, Seth Godin, and Hugh MacLeod: no one gives a shit about you or your projects. Bring them something really interesting from all corners of the web and they’ll read next week…maybe.
Source: Personal Publishing Principles | CJ Chilvers

Image: Cris DiNoto

The future is the least renewable resource

Carlos Alvarez Pereira, vice president of the Club of Rome, is interviewed by WIRED about a book called The Limits to Growth, published in 1972. Interestingly, he’s both critical of capitalism and confident that a cultural movement “hidden in plain sight” means that we’ll be in a better position than we are now.

The computer modeling made it plain: If people continued to overextract finite resources, pollute on a massive scale, and balloon the human population in an unsustainable way, civilization could collapse within a century. It sounds like that modeling could have been done last week, what with climate change, water shortages, and microplastics corrupting every corner of the Earth. But in fact it dropped in the 1972 book The Limits to Growth, published by the Club of Rome, an international organization of intellectuals founded in 1968.

To mark the book’s 50 year anniversary, WIRED sat down with Alvarez Pereira to talk about how that future is shaping up, what’s changed in the half-century since Limits, and how humanity might correct course. The conversation has been condensed and edited for clarity.

[...]

WIRED: Presumably economists weren't too fond of it because growth is inherent to capitalism. And unchecked growth really, a kind of maniacal, ecologically-destructive growth at all costs that's built into the system.

CAP: What the system has done, as a mechanism to continue with growth at all costs, is actually to burn the future. And the future is the least renewable resource. There is no way that we can reuse the time we had when we started this conversation. And by building up a system which is more debt-driven—where we keep consumption going, but by creating more and more debt—what we're actually doing is burning or stealing the time of people in the future. Because their time will be devoted to repaying the debt.

Source: The Infamous 1972 Report That Warned of Civilization's Collapse | WIRED

Amazon as a dumb pipe

I like this idea from Cory Doctorow, but monopolies tend to like exploiting their monopoly position. Still, it might be a way for Amazon to get around being scrutinised closely by regulators?

But what if buying local was as easy as shopping on Amazon? What if you could buy local while shopping on Amazon?
Source: View a SKU. Let’s Make Amazon Into a Dumb Pipe | by Cory Doctorow | Medium

Ian Bogost on hybrid work

I always enjoy Ian Bogost’s articles for The Atlantic as they’re thought-provoking. In this one, he talks about how ‘hybrid work’ is doomed, mainly because The Office is a construct, a way of organising life and work that is heavily invested in the status quo.

A rational assessment of your time and productivity was never quite at issue, and I think it never will be. Companies have been pulling employees back to work in person irrespective of anyone’s well-being or efficiency. That’s because return-to-office plans are not concerned, in any fundamental way, with workers and their plight or preferences. Rather they serve as affirmations of a superseding value—one that spans every industry of knowledge work. If your boss is nudging you to come back to your cubicle, the policy has less to do with one specific firm than with the whole firmament of office life: the Office, as an institution. The Office must endure! To the office we must go.

This should be obvious, but somehow it is not: The existence of an office is the central premise of office work, and nothing—not even a pandemic—will make it go away.

[...]

Even in the technology sector, where the tools of remote work are manufactured, the Office reigns supreme. Before the pandemic, Big Tech companies doubled down on the sorts of work environments that had been common for almost a century: urban high-rises and suburban office parks. (Think of Microsoft’s campus in Redmond, Washington; Google’s and Facebook’s in Silicon Valley; Apple’s spaceship in Cupertino; and the Salesforce Tower in San Francisco.) Their deluxe office amenities—free food, gymnasiums, medical care, etc.—only underscore this point: The tech industry has a deep investment in the most conservative interpretation of office life.

If the companies that design and build the very foundations for remote work still adhere to the old-fashioned values of the Office, what should we expect from all the rest? It’s still possible that hybridized knowledge work will become the norm, with work-from-home days provided as a perk. But to get there, office workers must organize, and take the goals and power of the Office into account. It does not want to be flexible, and it cares little for efficiency. If the Office makes concessions, they will be minor, or they will take time; hybrid work is not a revolution.

Source: Hybrid Work Is Doomed | The Atlantic

Steaming open the institutional creases

This is a heart-rending article by Maria Farrell, who suffers from Chronic Fatigue Syndrome. She details her experiences for the benefit of Long Covid sufferers, and it’s not easy reading.

I’m including this quotation mainly because she talks about the impact of the Tory government in the UK over the last decade or so. It’s easy to forget that things didn’t use to be like this.

I hid for two years in graduate school, the first year in a wonderful and academically undemanding programme with a tiny, lovely class. I wrote an essay about Walter Benjamin and interactive media that winter, and I remember pulling each sentence rather brutally from the morass of my former abilities and piling them on top of each other. Let’s just say the angel of history made sense to me in a way she had not, before. Minute on minute, I could barely make the letters settle into words, forget about forming sentences or ideas, but day on day it turned out I could do it. It just took a higher threshold of discomfort than I’d previously believed manageable, and about eight times longer. I’m so glad I learnt this. The knowledge that impossibly difficult intellectual tasks can be worked through piecemeal – not in darts and dashes of caffeinated brilliance – was not natural to my temperament, and it’s why I can still do things.

It’s a very bourgeois thing to be able to hide out in grad school. I’m always embarrassed when people remark on how many degrees I have. It put me into financial penury for quite a few years, but it felt worth it to still outwardly look like a person who was moving forward in life, not someone whose clock had stopped in August 1998 when I failed to heal from glandular fever. All that is harder now in Britain, as Tories systematically steam open the institutional creases people like me could fold ourselves into, and dismantle the social welfare that would have held many others as they waited to be well. I started off with moderate M.E. and now, much of the time, I would say it is mild.

Source: Settling in for the long haul | Crooked Timber

Life cannot be organised

Rebecca Toh is not only a fantastic photographer, but also has a wonderful turn of phrase.

In a way writing is a desperate attempt at organising what cannot be organised – life. But we all valiantly try because what is the alternative.
Source: life cannot be organised | rebeccatoh.co

This bus ain't growing wings

Cory Doctorow, activist, technologist, sci-fi writer, and all-round awesome human being, has written a powerful article for Locus magazine. He likens the climate emergency to us being collectively trapped on a bus that’s speeding towards a cliff edge.

We’ll all die at the bottom of the canyon, but no-one will yank the wheel, as it would cause the bus to roll and many people to be hurt.

The good news is: climate denial is on the wane. The bad news is: deniers have pivoted to incrementalism: “We’ll fix the climate. Give us a couple decades to phase out oil and gas. Give us a couple decades to replace the cars and retrofit the houses. Give us a couple decades to invent cool direct-air carbon capture systems, or hydrogen cars that work just like gas cars, or to replace our overland aviation routes with high speed rail, or to increase our urban density and swap out cars for subways and buses. Give us a couple decades to keep making money. We’ll get there.”

In other words: “We’re pretty sure we can get some wings on this bus before it goes over the cliff. Keep your hands off the wheel. Someone could get really badly hurt.”

People are already getting really badly hurt, and it’s only going to get worse. We’re poised to break through key planetary boundaries – loss of biosphere diversity, ocean acidification, land poisoning – whose damage will be global, profound and sustained. Once we rupture these boundaries, we have no idea how to repair them. None of our current technologies will suffice, nor will any of the technologies we think we know how to make or might know how to make.

Source: Cory Doctorow: The Swerve | Locus Online

The Digital Dark Ages

The author of this article helps out with computer museums around the world. He talks about how it’s not just nostalgia which fuels them, but learning about the technological and social context in which the hardware was situated.

He then explains that future historians won’t have much of that context because of DRM, IP laws, and encryption.

To future historians—not just of computing, but of humanity—the current period will be a dark age.

How was Facebook used by students in the 2010s? We cannot show you, that version of Facebook is not hosted anywhere.

What correspondence did Vint Cerf have as president of the ACM with other luminaries of computing industry and research? We do not know; Google will not publish his emails.

What was it like playing Angry Birds on an iPhone 3G? We do not know; Apple is no longer distributing signed receipts for that binary.

What did the British cabinet discuss when they first learned of the Coronavirus pandemic? We do not know; they chatted on a private WhatsApp group.

What books were published analysing the aftermath of the Maidan coup in Ukraine? We do not know; we do not have the keys for the Digital Editions DRM. How was the coup covered in televised news? We do not know; the broadcasters used RealVideo and Windows Media Encoder and we cannot read those files.

Source: The Digital Dark Ages | De Programmatica Ipsum

Criticism vs praise

Like most people, it would seem, I’m sensitive to criticism. Not just that, but even the absence of praise can be problematic. It’s something I’m working on, and this article, which points out that criticism is more connected to the person making the comments than to the one receiving them, is helpful.

Whether it's criticism calmly dispensed by a teacher at school, or a cruel comment hurled in the heat of an argument with a friend or lover, we tend to remember criticism far better than positive comments, due to a phenomenon called the negativity bias.

[…]

While a focus on the darker side of the world around us may sound like a depressing prospect, it has helped humans overcome everything from natural disasters to plagues and wars by being better prepared to deal with them (although there is evidence that optimism can also help to protect us from the stress of extreme situations). The human brain evolved to protect our bodies and keep us alive, and has three warning systems to deal with new dangers. There’s the ancient basal ganglia system that controls our fight or flight response, the limbic system which triggers emotions in response to threats to help us understand dangers, and the more modern pre-frontal cortex, which enables us to think logically in the face of threats.

[…]

In some cases, negative remarks from people we love can lead to long-lasting mental wounds and resentment that can cause relationships to break down. Researchers at the University of Kentucky in the US found relationships are seldom saved when partners ignore relationship problems to remain “passively loyal”. “It is not so much the good, constructive things that partners do or do not do for one another that determines whether a relationship works as it is the destructive things that they do or not do in reaction to problems,” they said.

[…]

“We are all sensitive to negative comments in the sense that there are no ‘stronger’ personality traits. Considering the fact that everyone receives negative comments can help us deal with them … and could be a good strategy to protect our own mental health,” she adds. “Another useful strategy could be to consider that comments are more connected to the person who’s making them than the one who’s receiving them."

Source: Why criticism lasts longer than praise | BBC Future

Is our society structured in a way which encourages people to make less than the greatest contribution they could?

Colin Percival is the founder of Tarsnap, a somewhat-niche cryptographically-secure backup solution. In this post, he replies to a comment suggesting that he’s wasting his life on something less important than the world’s biggest problems.

His point, I think, is that starting your own business is, these days, the only way to do the kind of deep work which people like him find fulfilling. So I guess the question is whether there’s an even better way of structuring society to enable even greater contribution?

First, to dispense with the philosophical argument: Yes, this is my life, and yes, I'm free to use — or waste — it however I please; but I don't think there's anything wrong with asking if this is how my time could be best spent. That applies doubly if the question is not merely about the choices I made but is rather a broader question: Is our society structured in a way which encourages people to make less than the greatest contribution they could?

[…]

In many ways, starting my own company has given me the sort of freedom which academics aspire to. Sure, I have customers to assist, servers to manage (not that they need much management), and business accounting to do; but professors equally have classes to teach, students to supervise, and committees to attend. When it comes to research, I can follow my interests without regard to the whims of granting agencies and tenure and promotion committees: I can do work like scrypt, which is now widely known but languished in obscurity for several years after I published it; and equally I can do work like kivaloo, which has been essentially ignored for close to a decade, with no sign of that ever changing.

[…]

Is there a hypothetical world where I would be an academic working on the Birch and Swinnerton-Dyer conjecture right now? Sure. It’s probably a world where high-flying students are given, upon graduation, some sort of “mini-Genius Grant”. If I had been awarded a five-year $62,500/year grant with the sole condition of “do research”, I would almost certainly have persevered in academia and — despite working on the more interesting but longer-term questions — have had enough publications after those five years to obtain a continuing academic position. But that’s not how granting agencies work; they give out one or two year awards, with the understanding that those who are successful will apply for more funding later.

Source: On the use of a life | Daemonic Dispatches

The future has been foreclosed and the present is intolerable

This is an insightful and enjoyable article about something which I’ve noticed even at my level of gaming. For example, when quickly explaining the controls for Sniper Elite 4 to someone recently, I realised they were almost exactly the same as Red Dead Redemption 2.

That ‘legibility’ is a double-edged sword. It allows players to switch between games quickly and easily, but perhaps militates against innovation, experimentation, and getting really deep into a game…

Writing for TANK magazine in 2019, Josh Citarella mused on how WoW Classic tied into Mark Fisher’s idea of “the slow cancellation of the future” (aka “where are my hoverboards”)...  Citarella points to the collapsing gap between items that generate culture and items that can be (nostalgically) reflected upon, especially as “The future has been foreclosed [and] the present is intolerable.”

[…]

Said differently, games are forced to be legible to players. This isn’t a call for radical experimentalism but to simply state that the cost to make games (due to a large amount of factors) is steadily increasing, and as such there is a proportionally growing interest by the powers that be that those games turn a profit. With little flex on things like price (proposing games should cost $70, $80, or more leads to general uproar, despite being something that should totally happen), games are forced to internalize this economic burden on the process of production itself.

[…]

It’s here that I introduce the title of this article, something that sounds more thinky than it is - “Game Design Mimetics”. If the role of mechanics design in a game is to best serve the content of the game, be legible to the player, and not introduce too much uncertainty into the middle of a production, the simplest answer to “what should we do about the design” is to just “copy what already works”.

[…]

The past here isn’t looked at as the past, but instead as the metric by which to hold directly against considerations for the present. The constant backwards facing view as the rubric by which to create the future acts as a collapsing mechanism for possibility.

Source: Game Design Mimetics (Or, What Happened To Game Design?) | k-hole

Recalling generative and liberating uses of technology

I found myself using the phrase “the night is darkest before dawn” today. This post from Anne-Marie Scott is certainly an example of that, and I too look forward to a world beyond “today’s dogpile of an internet”.

I remember a time when I got excited about generative and liberating uses of technology, enabling people to bring their whole selves to learning, being able to incorporate their world, their context, their knowledge, and in turn develop new connections, new communities, and new knowledge to further explore and build on these things. I think this is still possible, and I think work around open practices, open pedagogies, ethics of care, and decolonisation point the way towards how to do it in today’s dogpile of an internet.
Source: Hitting the wall and maybe working out how to get back up again | A placid island of ignorance…

The corrosive nature of capitalism

I used to think there was no chance of the current system of capital-based society ending within my lifetime.

But now? I’m not so sure. I see influential writers I respect like Seth Godin and (in this case) Warren Ellis talk openly about the harms of capitalism.

And given the crypto collapse following the pandemic perhaps people are slowly coming to realise there’s more to life than money…

money

From a certain perspective, capitalism is the environment into which we are born, and conditions within it are corrosive: we either adapt to those conditions in order to survive — people will always have to be taught how to tend the machines, and it has been said, after all, that humans are the reproductive organs of machines — or build a sturdy environment suit, or we are seriously harmed. Which casts many of us as good little prisoners or effective wasteland scavengers.
Source: A Suit Of Capitalism | WARREN ELLIS LTD

Image: Jorge Salvador

Frozen baby woolly mammoth discovered in Yukon gold fields

Amazing. Look at how perfectly this creature was preserved in the permafrost!

I guess we’ll be seeing a lot more of this kind of thing as the permafrosts melt due to the climate crisis.

The baby woolly mammoth, named Nun cho ga, which means "big baby animal" in the Trʼondëk Hwëchʼin's Hän language, is about 140 cm long, which is a little bit longer than the other baby woolly mammoth that was found in Siberia, Russia, in May 2007.

Zazula thinks Nun cho ga was probably about 30 to 35 days old when she died. Based on the geology of the site, Zazula believes she died between 35,000 and 40,000 years ago.

“So she died during the last ice age and found in permafrost,” said Zazula.

Source: ‘She’s perfect and she’s beautiful’: Frozen baby woolly mammoth discovered in Yukon gold fields | CBC News

Crypto clowns

If you’re at the top of the Ponzi scheme pyramid, you have a vested interest in keeping it going…

Not coincidentally, the companies doing the least reflecting are the ones with their hands deepest in the cookie jar. Part of what spurred on the current crash was a cryptocurrency called TerraUSD, a type of so-called stablecoin designed to more or less equal the value of the U.S. dollar. The whole point of stablecoins is that they’re supposed to be less volatile than other cryptocurrencies, a way of protecting your money while still keeping your chips in the casino. That was the idea, at least: TerraUSD was tied to another cryptocurrency called Luna, and when its value plummeted in early May, investors promptly dumped their TerraUSD. Tokens meant to sell for $1 a pop were suddenly trading for almost nothing, and, according to Bloomberg, $60 billion of investors’ money was zapped away.

[…]

As the wider crypto market has tanked in the weeks since the Terra collapse, other flailing companies have been similarly unwilling to publicly reflect on the damage. The crypto lender Celsius Network made it big by promising yields much higher than those of traditional bank accounts. That approach generated gobs of money when crypto was booming, but apparently it hasn’t fared so well during the downturn. As rumors began to circulate about Celsius’s financial issues, the company’s founder, Alex Mashinsky, dismissed it all as “FUD,” crypto shorthand for “fear, uncertainty, and doubt.” “Do you know even one person who has a problem withdrawing from Celsius?” he tweeted. Just over 24 hours later, the company put a freeze on all withdrawals, locking customers out of their accounts. (The freeze remains in place almost two weeks later.)

[...]

Throughout the industry, there’s a sense from the biggest players in crypto that if we all just keep the faith, traders can effectively spend their way out of the crisis. Cameron Winklevoss, the billionaire co-founder of the crypto exchange Gemini, recently tweeted that the bitcoin dip feels “irrational,” because “the underlying fundamentals, adoption, and infrastructure have never been stronger.” It’s not a question of fundamentals, though; asking people to look more closely at the tech will not somehow end the bear market. A few days ago, Michael Saylor, whose software company, MicroStrategy, has spent billions of dollars acquiring bitcoin, called the cryptocurrency “a lifeboat, tossed on a stormy sea, offering hope to anyone in the world that needs to get off their sinking ship.” But right now, bitcoin is the sinking ship.

Source: Crypto Is Crashing. Have the Crypto Bosses Learned Anything At All? | The Atlantic

Counting the cost of Brexit

Another article about Brexit, after one last week. I think Brexit was a form of economic suicide, but over the weekend I’ve been thinking about the wider perspective.

Not only did we have a huge worldwide economic crash around 15 years ago, but everyone came online with their smartphones around the same time. So we’ve had a lot of revelations and a lot of resetting to do. Perhaps all of this is the tumultuous times before a new form of society?

One can only hope. Britain is going to be screwed no matter what, because we’re disconnected from our main trading and cultural partners.

Queuing trucks

Most of the trade deals with non-EU countries that the UK has signed have been small in their economic effect, and have merely been “rolled over” from identical ones when we were an EU member. Even Jacob Rees-Mogg, the minister for Brexit opportunities, has stopped talking about Brexit and the UK economy, and instead focuses on what he says is the democratic dividend, the winning back of control, and the return of sovereignty. That is not surprising because day by day the economic data is piling up showing the harm that leaving the EU is doing to the nation’s finances.

Johnson and the Vote Leave campaign promised in 2016 that £350m a week would flow back from Brussels because we would stop contributing to EU coffers.

The impression was that there would be no downside. We would thrive outside Europe’s bureaucracy which was strangling our companies with red tape. The huge benefits of the single market – trading freely across borders, with common standards – were never highlighted by Vote Leave, and rarely by the crudely alarmist Remain camp, either.

Only now, with the worst of the pandemic (probably) behind us, and ministers unable to blame Covid, is Brexit reality being laid bare.

Next year the OECD calculates that the UK will record the lowest growth in the G20 with the exception of Russia whose economy is being drained by its war on Ukraine.

Source: ‘What have we done?’: six years on, UK counts the cost of Brexit | The Guardian

Making adulthood more desirable

I definitely feel this at the moment. As a parent, your kids mostly follow what you do rather than what you say, which confers quite a bit of responsibility on how you choose to live your life…

Young woman with lights

For many, adulthood means trading a life entirely devoted to learning for one in which you only read (maybe) two books a year. It means swapping a full schedule of sports, clubs, and music lessons for having exactly zero hobbies (unless watching Netflix counts). It means going from hanging out with peers for the bulk of each day to (maybe) seeing friends a few hours a month. It means shifting from experiencing plenty of firsts to being stuck in a hamster wheel of thousandths.

[…]

Adulthood means taking on more responsibilities, and in turn, receiving more privileges. Unless we do something worthwhile — fun, interesting, desirable — with those privileges, young people won’t want to apply to the society of grown-ups, and adults won’t be able to wholeheartedly encourage them to join its ranks.

Source: Sunday Firesides: We Need to Make Adulthood More Desirable | The Art of Manliness

Image: Henri Pham

Losing followers, making friends

There’s a lot going on in this article, which I’ve taken plenty of quotations from below. It’s worth taking some time over, especially if you haven’t read Thinking, Fast & Slow (or it’s been a while since you did!)

Social media inherited and weaponised the chronological weblog feed. Showing content based on user activity hooked us in for longer. When platforms discovered anger and anxiety boosts screen time, the battle for our minds was lost.

Till this point the fundamental purpose of software was to support the user’s objectives. Somewhere, someone decided the purpose of users is to support the firm’s objectives. This shift permeates through the Internet. It’s why basic software is subscription-led. It’s why there’s little functional difference between Windows’ telemetry and spyware. It’s why leaving social media is so hard.

Like chronological timelines, users grew to expect these patterns. Non-commercial platforms adopted them because users expect them. While not as optimized as their commercial counterparts, inherited anti-patterns can lead to inherited behaviours.

[…]

In his book Thinking Fast And Slow, Nobel Laureate Daniel Kahneman describes two systems of thought…

[…]

System 1 appears to prioritise speed over accuracy, which makes sense for Lion-scale problems. System 1 even cheats to avoid using System 2. When faced with a difficult question, System 1 can substitute it and answer a simpler one. When someone responds to a point that was never made that could be a System 1 substitution.

[…]

10 Years ago my life was extremely online. I’ve been the asshole so many times I can’t even count. Was I an asshole? Sure, but the exploitation of mental state in public spaces has a role to play. It’s a strange game. The only way to win is not to play.

Commercial platforms are filled with traps, some inherited, many homegrown. Wrapping it in Zuck’s latest bullshit won’t lead to change. Even without inherited dark patterns, behaviours become ingrained. Platforms designed to avoid these patterns need to consider this if exposed to the Dark Forest.

For everything else it’s becoming easier to just stay away. There are so many private and semi-private spaces far from the madding crowd. You just need to look. I did. I lost followers, but made friends.

Source: Escaping The Web’s Dark Forest | by Steve Lord

The omnishambles of Brexit

The UK is a pretty bad place to live at the moment. Except for the US and, well, a lot of other places. I guess what I’m saying is that things are pretty bad here politically and economically, but then the rest of the world is pretty screwed as well.

Union Flag with arrows going in different directions

Britain today is a poor and divided country. Parts of London and the southeast of England might be among the wealthiest places on the planet, but swaths of northern England, Wales, Scotland, and Northern Ireland are among Western Europe’s poorest. Barely a decade ago, the average Brit was as wealthy as the average German. Now they are about 15 percent poorer—and 30 percent worse off than the typical American.

[…]

In the 2016 Brexit referendum and then in the 2019 general election, Johnson offered voters the chance to “take back control” of their destiny, to rebalance the country and to pull it together again. On both occasions, he won.

Six years on, however, we can safely say his project is failing. His government is busy trying to wrest back more control rather than exercising what it has regained. It has not united the country. It has not even begun to level it up.

The truth is, this government won’t accomplish any of that. Until Britain stops trying to restore a vanished past—whether the one imagined by its pro-Brexit Leavers or its anti-Brexit Remainers—and begins to construct a viable future, the country as a whole never will.

Source: What Brexit Promised, and Boris Johnson Failed to Deliver | The Atlantic

Moonshine-enabling cow shoes

The Sunday Surfers (a name my group of friends give to ourselves when playing PlayStation) came across an abandoned moonshine still in the game Red Dead Redemption 2 last weekend. That possibly primed my brain to find this random article even more interesting than I did already!

The shoe was described in a Florida newspaper in 1922 which led to many people, including the authorities, knowing a great deal about the moonshiners’ ingenious way of preventing detection. The knowledge of the shoe didn’t immediately stop its use nor did it stop the moonshiners who just continued to think of new ways to evade police and get their product to an increasingly thirsty public.
Source: Moonshiners Wore Special Shoes To Evade the Law During Prohibition

GNOME <3

I’m a big fan of GNOME as well. Although configurability is important, starting from a basis of opinionated design leads to better results, I think.

There are people who're used to the traditional desktop, taskbar at the bottom, application menus, desktop icons and alike. There are minimalists who build their desktops essentially from scratch using tiling or floating window managers. Then there are people who don't really care about what they're using and they tend to stick with whatever came with their system. I'm neither one of those (or at least, not anymore). I happen to agree with Gnome's opinionated desktop philosophy...

[…]

I keep coming back to Gnome and it never ceases to amaze how quickly I can start being productive in it. That’s what a desktop is supposed to do, get out of your way as much as possible while providing great features to facilitate that. It’s very much opinionated about its design and experience, but you shouldn’t fight it. Learn to embrace Gnome for what it is, a beautiful, if somewhat different desktop for developers and regular users alike.

Source: Gnome, the opinionated desktop environment | Dušan’s blog

Worker-owned co-op federation

Sion Whellens helped us set up WAO six years ago, and he’s quoted in this piece about a new worker co-op federation.

There’s been a feeling for a while that Coops UK only really represents large co-operatives such as food co-ops. So this new organisation, of which we will be a member, should be much better at giving worker-owned co-ops a voice.

The vision of the organisation (the name is still under discussion) is to bring together an alliance of people and organisations “with an explicit focus on worker issues, worker-led organising, social solidarity, and economic justice”.

[…]

The new federation is starting out small – there are around 400 worker co-ops in the UK – but is receiving support and guidance from other federations worldwide, including the US Federation of Worker Cooperatives. “USFWC is a relatively new organisation that has managed to pull together something that works really effectively with little core resource in terms of membership subscriptions. They don’t have a vast amount of rich worker co-ops to fund it, so they have to pull resources together in different ways. We can learn from that,” says Mr Whellens.

[…]

Co-operatives UK has been a home for worker co-ops for 20 years, but Mr Whellens believes that over the last decade, there has been “a growing realisation that it can’t really speak about worker co-operation authentically, or develop the worker co-op-specific resources we really need”.

[…]

“There has been a progressive loss of a distinctive voice and service for worker co-ops and we reached a point where we realised we need an independent voice and an independent network that can be more agile, less bureaucratic, more focused on the primary audience for worker co-operation – which is workers.”

Source: New federation planned for worker co-ops in the UK | Co-operative News

Psycho-Geography 

This is incredible. I want to see it!

Each concrete slab in the Cretto di Burri measures between ten and twenty meters on each side and stands at around 1.6 meters tall. The enormous yet walkable fissures in the concrete mirror the old town’s streets and corridors, reconjuring spatial memories of the destroyed city while marking its status as uninhabitable ruins. In Burri’s imagination, the cracked landscapes of Death Valley that had served as inspiration for his work functioned as a kind of psycho-geography, suggesting the violence and trauma of fascist rule and industrialized warfare that he had experienced as an Italian citizen living through both World Wars. In similar fashion, the cracked white concrete of the Cretto di Burri memorializes and reifies the trauma and grief of the Belice earthquake, with the fissures marking not just the literal roads and streets of the original town but also the violence done to the land, people, and profoundly to the cultural memory of the site.

The white concrete, as a common urban construction material, suggests the pale corpse of the lost city, while the textures and fissures marking the presence and memory of the old city reveal the futility of erasing and moving forward on a psycho-geographic tabula rasa. Altogether, the Cretto di Burri beautifully responds to a moment of profound cultural grief through its pared-down, yet highly suggestive form and materiality.

Source: The Psycho-Geography of the Cretto di Burri | ArchDaily

Abandoned places

We didn’t have time to go and see the bay with lots of abandoned hotels near Dubrovnik when we were in Croatia recently. But there’s definitely something fascinating about faded glamour and abandoned places.

Though apocalyptic, there's something beautiful about abandoned places. The clocks have stopped ticking and there's not a soul in sight, but the shell of what used to be remains. Abandoned places show us what happens without consistent human upkeep—and perhaps what could even happen to the places we love and frequent. These spots are haunting, and there is a mysterious beauty in neglect. The following locations (albeit somewhat weathered over time), are some of the most striking we've ever seen. Read on to see the most beautiful abandoned places in the world—and learn their backstories. You'll almost feel voyeuristic looking at them, like you're witnessing a very intimate piece of someone else's life.
Source: 54 Most Beautiful Abandoned Places | House Beautiful

One sentence per line

This is spectacularly simple advice from Derek Sivers. I immediately used the approach after reading this article for a script I was writing for a screencast and it really helped!

My advice to anyone who writes: Try writing one sentence per line. I’ve been doing it for twenty years, and it improved my writing more than anything else.

New sentence? Hit [Enter]. New line.

Not publishing one sentence per line, no. Write like this for your eyes only. HTML or Markdown combine separate lines into one paragraph.

Source: writing one sentence per line | Derek Sivers
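The reason this trick stays invisible to readers is the paragraph-joining rule Sivers mentions: in HTML and Markdown, a blank line separates paragraphs, while single line breaks inside a paragraph collapse into spaces. A minimal Python sketch of that behaviour (the helper name is my own, not from the article):

```python
# Hypothetical helper mimicking how Markdown/HTML renderers treat line
# breaks: blank lines separate paragraphs; single newlines within a
# paragraph collapse into spaces when rendered.
def render_paragraphs(text):
    return [
        " ".join(line.strip() for line in para.splitlines() if line.strip())
        for para in text.split("\n\n")
        if para.strip()
    ]

source = (
    "My advice to anyone who writes:\n"
    "Try writing one sentence per line.\n"
    "I have been doing it for twenty years."
)

# All three sentences come out as a single paragraph, so the
# one-sentence-per-line structure exists only in your source file.
print(render_paragraphs(source))
```

So you get the editing benefits (easy reordering, sentence-length at a glance, cleaner diffs in version control) without changing what anyone else sees.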

Travelling light

There are some good tips in this guide about travelling light: the kind of gear to buy, which trade-offs to make, and so on. Interestingly, it’s from one of the founders of Ethereum, Vitalik Buterin. I’m not sure what I think of him, to be honest, but this guide is useful nevertheless.

I have lived as a nomad for the last nine years, taking 360 flights travelling over 1.5 million kilometers (assuming flight paths are straight, ignoring layovers) during that time. During this time, I've considerably optimized the luggage I carry along with me: from a 60-liter shoulder bag with a separate laptop bag, to a 60-liter shoulder bag that can contain the laptop bag, and now to a 40-liter backpack that can contain the laptop bag along with all the supplies I need to live my life.

[…]

As a point of high-level organization, notice the bag-inside-a-bag structure. I have a T-shirt bag, an underwear bag, a sock bag, a toiletries bag, a dirty-laundry bag, a medicine bag, a laptop bag, and various small bags inside the inner compartment of my backpack, which all fit into a 40-liter Hynes Eagle backpack. This structure makes it easy to keep things organized.

[…]

As you might have noticed, a key ingredient in making this work is to be a USB-C maximalist. You should strive to ensure that every single thing you buy is USB-C-friendly. Your laptop, your phone, your toothbrush, everything. This ensures that you don’t need to carry any extra equipment beyond one charger and 1-2 charging cables. In the last ~3 years, it has become much easier to live the USB-C maximalist life; enjoy it!

Source: My 40-liter backpack travel guide

The internet is broken because the internet is a business

I ended up cancelling my Verso books subscription because I was overwhelmed with the number of amazing books coming out every month. This looks like one to keep an eye out for.

Several decades into our experiment with the internet, we appear to have reached a crossroads. The connection that it enables and the various forms of interaction that grow out of it have undoubtedly brought benefits. People can more easily communicate with the people they love, access knowledge to keep themselves informed or entertained, and find myriad new opportunities that otherwise might have been out of reach.

But if you ask people today, for all those positive attributes, they’re also likely to tell you that the internet has several big problems. The new Brandeisian movement calling to “break up Big Tech” will say that the problem is monopolization and the power that major tech companies have accrued as a result. Other activists may frame the problem as the ability of companies or the state to use the new tools offered by this digital infrastructure to intrude on our privacy or restrict our ability to freely express ourselves. Depending on how the problem is defined, a series of reforms are presented that claim to rein in those undesirable actions and get companies to embrace a more ethical digital capitalism.

There’s certainly some truth to the claims of these activists, and aspects of their proposed reforms could make an important difference to our online experiences. But in his new book ‘Internet for the People: The Fight for Our Digital Future’, Ben Tarnoff argues that those criticisms fail to identify the true problem with the internet. Monopolization, surveillance, and any number of other issues are the product of a much deeper flaw in the system.

“The root is simple,” writes Tarnoff: “The internet is broken because the internet is a business.”

Source: The Privatized Internet Has Failed Us | Jacobin

The ultimate act of self-denial

This is absolutely wild.

Scattered throughout northern Japan are over two dozen mummified Japanese monks known as sokushinbutsu. Followers of shugendō, an ancient form of Buddhism, the monks died in the ultimate act of self-denial.

For three years, the priests would eat a special diet consisting only of nuts and seeds, while taking part in a regimen of rigorous physical activity that stripped them of their body fat. They then ate only bark and roots for another three years and began drinking a poisonous tea made from the sap of the urushi tree, normally used to lacquer bowls. This caused vomiting and a rapid loss of bodily fluids, and—most importantly—it killed off any maggots that might cause the body to decay after death. Finally, a self-mummifying monk would lock himself in a stone tomb barely larger than his body, wherein he would not move from the lotus position. His only connection to the outside world was an air tube and a bell. Each day, he rang a bell to let those outside know that he was still alive. When the bell stopped ringing, the tube was removed and the tomb sealed.

Source: Sokushinbutsu of Dainichibou Temple | Atlas Obscura

Muting the American internet

This is a humorous article, but one with a point.

[W]e need a way to mute America. Why? Because America has no chill. America is exhausting. America is incapable of letting something be simply funny instead of a dread portent of their apocalyptic present. America is ruining the internet.

[…]

The greatest trick America’s ever pulled on the subjects of its various vassal states is making us feel like a participant in its grand experiment. After all, our fate is bound to the American empire’s whale fall. My generation in particular is the first pure batch of Yankee-Yobbo mutoids: as much Hank Hill as we are Hills Hoist (look it up!), as familiar with the Supreme Court Justices as we are with the judges on Master Chef, as comfortable in Frasier’s Seattle or Seinfeld’s Upper West Side as we are in Ramsay Street or Summer Bay.

[…]

I should not know who Pete Buttigieg is. In a just world, the name Bari Weiss would mean as much to me as Nordic runes. This goes for people who actually might read Nordic runes too. No Swede deserves to be burdened with this knowledge. No Brazilian should have to regularly encounter the phrase “Dimes Square.” To the rest of the vast and varied world, My Pillow Guy and Papa John should be NPCs from a Nintendo DS Zelda title, not men of flesh and bone, pillow and pizza. Ted Cruz should be the name of an Italian pornstar in a Love Boat porn parody. Instead, I’m cursed to know that he is a senator from Texas who once stood next to a butter sculpture of a dairy cow and declared that his daughter’s first words were “I like butter.”

Source: I Should Be Able to Mute America | Gawker

Morality, responsibility, and (online) information

This is a useful article in terms of thinking about the problems we have around misinformation online. Yes, we have a responsibility to be informed citizens, but there are structural issues which are actively working against that.

How might we alleviate our society’s misinformation problem? One suggestion goes as follows: the problem is that people are so ignorant, poorly informed, gullible, irrational that they lack the ability to discern credible information and real expertise from incredible information and fake expertise.

[…]

This view places the primary responsibility for our current informational predicament – and the responsibility to mend it – on individuals. It views them as somehow cognitively deficient. An attractive aspect of this view is that it suggests a solution (people need to become smarter) directly where the problem seems to lie (people are not smart). Simply, if we want to stop the spread of misinformation, people need to take responsibility to think better and learn how to stop spreading it. A closer philosophical and social scientific look at issues of responsibility with regard to information suggests that this view is mistaken on several accounts.

[…]

Even if there was a mass willingness to accept accountability, or if a responsibility could be articulated without blaming citizens, there is no guarantee that citizens would be successful in actually practising their responsibility to be informed. As I said, even the best intentions are often manipulated. Critical thinking, rationality and identifying the correct experts are extremely difficult things to practise effectively on their own, much less in warped information environments. This is not to say that people’s intentions are universally good, but that even sincere, well-meaning efforts do not necessarily have desirable outcomes. This speaks against proposing a greater individual responsibility for misinformation, because, if even the best intentions can be corrupted, then there isn’t a great chance of success.

[…]

Leaning away from individual responsibility means that the burden should be shifted to those who have structural control over our information environments. Solutions to our misinformation epidemic are effective when they are structural and address the problem at its roots. In the case of online misinformation, we should understand that technology giants aim at creating profit over creating public democratic goods. If disinformation can be made to be profitable, we should not expect those who profit to self-regulate and adopt a responsibility toward information by default. Placing accountability and responsibility on technology companies but also on government, regulatory bodies, traditional media and political parties by democratic means is a good first step to foster information environments that encourage good knowledge practices. This step provides a realistic distribution of both causal and effective remedial responsibility for our misinformation problem without nihilistically throwing out the entire concept of responsibility – which we should never do.

Source: On the moral responsibility to be an informed citizen | Psyche Ideas

Living forever

The interesting thing about this article is the predictions from forecasters on the website Metaculus. There’s wisdom in crowds, and particularly those who have interest/expertise in a given area.

There are a million philosophical questions about ‘living forever’ or just humans living for a lot longer than they do now. This article, however, just focuses on the four most promising ways, and their likelihood over different timescales.

We’re either the last generation to ever die, or the first generation to live forever.

I’m not talking figuratively here. You, reading this, might have an eternal life.

Source: The 4 Ways You Might Live Forever | Tomas Pueyo

Who knew tapping a checkbox could be so satisfying?

I love it when people who are great at what they do, and who sweat the details, share their processes. It’s well worth looking at the lengths this designer has gone to in order to make tapping a simple checkbox feel like an achievement. Visuals, sound, haptics, the lot!

These things matter. One of the reasons I like Trello so much, for example, is the confetti that emanates from the card when you drag it to ‘done’.

If we can add Feel to the humble checkbox, imagine what it could do for apps that aid in personal connections or creativity. Many of us make the mistake in thinking of the apps we design as public spaces—drawing inspiration from the rationality of airport signage or the deference of an art gallery. We completely forget that these experiences are also incredibly personal. And while a clean, white gallery space may be beautiful in its minimalism, it’s not the comforting place most would want to live.

Design can be reductive and rational. But it can also add richness to our lives.

Maximize that.

And use every tool you can get your hands on.

Source: The World’s Most Satisfying Checkbox | (Not Boring) Software

Subscriber count as power level against algorithmic demons

I’ve done a lot of writing for work this week and needed to hear some of the things in this post by Justin Murphy. Great stuff.

Mustering the discipline to write on a regular basis is a battle against yourself, against your own feeling that it doesn't matter.

Finding the will to click the publish button is a battle against yourself, against your own feeling that it’s not worth it.

You feel nervous about what your readers will think, but that makes no sense. They subscribed to you because they want to know what you think; you have zero reason to care what they think. If you really care what your readers think, then go subscribe to them. You are not subscribed to your readers because you do not care what they think. Now act like it.

Source: Writing is a Single-Player Game | Other Life

Artificial metrics are flying by instrument

We had a conversation earlier this week about how we’re going to measure the progress of some community work we’re doing. In the end, we decided that there were no metrics that would make sense. It’s a vibe.

This post says much the same thing. Sometimes there are no objective measurements for things that matter. And that’s OK.

Flight deck controls

Artificial metrics are flying by instrument. They're individual "better/worse" dials that in amalgamation are supposed to tell you which way things are going, as long as you are paying attention to the correct combination of them at the correct moment, and don't over-react to the feedback loops and crash the whole thing via a PIO (pilot-induced oscillation). Instrument-only flight is harder than visual flight, it takes extensive practice, and the mistakes have worse repercussions.

You can instead choose to just fly visually. It’s easier, it’s safer, and it’ll get you where you’re trying to go. The thing is, your entire industry thinks it’s impossible, and worse, they think it is irresponsible. They’re kinda right. You have to be good at the innate skill of flying, instead of the skill of navigating by instrument. Guess which one the “become a manager in tech” system produces. Bonus points: recognize how that is itself a PIO.

Bonus Bonus Bonus points: Consider that if you’ve learned the skillset of visual flight poorly, and you don’t use the instruments to correct yourself, how will you ever know it’s going wrong in time?

[…]

What matters for your team/org’s success is the fundamental human relationships, comradery, esprit de corps, support and space-curation, and especially, all of the prior while treating-em-like-adults. Those things make up the totality of why people want to work on your team and are excited about working with and supporting their peers. These are not invisible things. These are things you can pay attention to, structurally. These are not things you can quantify with numbers. You’re going to have to get comfortable with forming, expressing, and defending opinions based on things besides “data.” Not because you don’t have data, but because you don’t have quantifiable numbers that represent themselves, and our industry is poisoned into believing that only such things are data. We’ve got thousands of years of evolution helping us understand how group dynamics are flowing. Yes, using that is a skill set. That’s my point. Build and use that skill set. Learn how to read people’s reactions. Learn how to understand people’s motivations. Learn how to see how people work in groups and as individuals. Do the work.

Source: How to build orgs that achieve your goals, by absolutely never doing that | Graham says wrong things

Image: Jp Valery

Audrey Watters says goodbye to EdTech

Sadly, EdTech, the field that I used to feel part of, is never going to change, so this post from Audrey Watters was inevitable. Anything that can be commodified will be commodified, it would seem.

Thanks Audrey, you’re awesome. I hope you find solace and energy in what you decide to do next.

I probably do have a wee bit more to say about ed-tech — the "good riddance" part — but I don't feel like posting it on Hack Education. I'll write about it here — therapeutically, I reckon. But I don't really want to continue to churn out criticism of the field/industry/discipline. Sufficed to say: folks will bend over backwards to justify the most fucked-up tools and the most oppressive educational practices and technologies. Some folks will say yes, the technology is bad — if we just had better technology then everything'd be okay. Others will say that it's our educational practices that suck — if we just had better pedagogies, then everything technological would fall into place. Both camps still insist that the future is "digital," and as such, are trapped in a story that will never get them to "better" because the foundations will always be rotten. And so few people in ed-tech, so fixated on their fantasies about the future, want to talk about that.

Source: Goodbye Ed-tech, and Good Riddance | Audrey Watters

Getting out of a rut

I didn’t send out a Thought Shrapnel newsletter at the end of May as I’d hardly posted here during the month. There was no particular reason I could fathom for this. I guess I just got stuck in a rut of not-writing-here.

As David Cain points out in this post, ruts are often of our own creation, and happen because of how we respond to a ‘dip’ in mood, luck, or progress. Happily, I’m back posting here, and I’m in the opposite of a rut when it comes to exercise!

Ruts can be years long – that near-decade in which you didn’t touch the piano at all — or just a few days – you ordered out Tuesday instead of cooking, did it again Wednesday, and then again Thursday. Whatever the duration, ruts are temporary dips in our apparent ability to do a thing that’s important to us.

What I’ve noticed about my ruts is that they are mostly my own creation. Something external precipitates them, and something internal sustains them. Bad luck and bad weather are unavoidable, but a long rut can begin, and persist, even when the bad weather itself only lasted a day.

My theory is that ruts are what happen when you experience a dip – in mood, in luck, in progress – and you respond to it in a certain very human way: by doing something that makes you more prone to such dips. A simple example is the common sleep-caffeine spiral. You have a bad sleep for some reason (there was a party next door, or you saw a mouse in the cellar) and the next day you feel tired, and when you feel tired you sometimes have an afternoon coffee. This makes you more prone to more bad sleeps, which makes you more prone to afternoon coffees, and so on. You responded to the dip by doing something that creates more dips. All of this feels perfectly natural as it is happening.

Source: The Rut Principle | Raptitude

Yes, parenting matters

Parenting is the hardest job I have ever had. It never stops, and I seldom think I’m doing a good job at it.

That’s why it can be comforting to see ‘scientific studies’ indicate that it doesn’t really matter how you parent, in the long-run. The trouble is, as this article shows, that’s not actually true.

We can’t experimentally reassign children to different parents — we’re not monsters, and please don’t call to offer us your teenager — but sometimes real life does that anyway. Here’s an example: some Korean adoptees were assigned to American adopters by a queueing system which was essentially random. So there was no correlation between adoptees’ and parents’ genes. Yet, adoptees assigned to better educated families became significantly better educated themselves. Adopters made a difference in other ways too: for instance, mothers who drank were about 20% more likely to have an adoptive child who drank. This can’t be genetics. It must be something about the environment these parents provided. Other adoption studies reach similar conclusions.

More evidence comes from the grim events of death and divorce. If your parent dies while you are very young, you end up less like that parent, in terms of education, than otherwise. Again, that can’t be genetics. And children of parents who divorce become more like the parent they stay with. In other words, when parents spend time with their children, their behaviours and values rub off.

[…]

The bottom line is this: how much and what you say to your child from their first few days literally carves new paths in their brain. We know this from research on speech development. When mothers responded to their babies’ cues with the most basic vocalisations, they accelerated their children’s language development. So go ahead and babble along with your toddler.

Source: No wait stop it matters how you raise your kids | Wyclif’s Dust

EaaS : Employee as a Service

This is humorous, but also we should remind ourselves that bosses need workers, but workers don't need bosses 🤘

Interviewee explaining to interviewer that they have a 'variety of plans to meet your needs'. Things like overtime, personal number being available, and working with a smile are listed under 'Premium'.

Source: EaaS : ProgrammerHumor

'Slack' and work

I’m composing this having done about 19 paid hours of work this week. I’ve also contributed to Open Source projects, written here, done some housework, parenting, and various other things.

I don’t define myself by paid work. I can’t really even properly tell you what I ‘do’ for a ‘job’, to be honest.

According to Bertrand Russell, this is all well and good. As Andrew Curry notes in this post, we should be aiming for about 60% capacity at any given time. I usually end up averaging between 20 and 25 hours per week, so it looks like I’m doing OK…

Portrait of Bertrand Russell

One of the key parallels that’s useful to draw here is between the idea of working less and ‘slack’. Slack is a difficult concept to pin down, but can exist in forms from queueing theory to buffer states. Working fewer hours than the current default 40-hour week is probably what most people do already, and it is also probably likely to move our slack-meter to a more optimal level.

Running with significant slack is often more efficient than running systems at high capacity. If you’re mathematically minded, Erik Bern simulates this via some code in the queueing theory link above, but G Gordon Worley III…gives a simpler explanation:

If you work with distributed systems, by which I mean any system that must pass information between multiple, tightly integrated subsystems, there is a well understood concept of maximum sustainable load and we know that number to be roughly 60% of maximum possible load for all systems.

This property will hold for basically anything that looks sufficiently like a distributed system. Thus the “operate at 60% capacity” rule of thumb will maximize throughput in lots of scenarios: assembly lines, service-oriented architecture software, coordinated work within any organization, an individual’s work, and perhaps most surprisingly an individual’s mind-body.

“Slack” is a decent way of putting this, but we can be pretty precise and say you need ~40% slack to optimize throughput: more and you tip into being “lazy”, less and you become “overworked”.

Allowing flexibility and time into our systems so that we can sit idle is not an admission of defeat, but instead has the potential to be optimal in many circumstances.
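
The ~60% rule of thumb comes straight out of queueing theory, and a toy simulation makes it concrete: in a single-server M/M/1 queue, the average time a job spends in the system is service_mean / (1 − utilisation), so delays roughly double between 60% and 80% load and explode as you approach 100%. The sketch below is my own illustration of that result, not the simulation credited to Erik Bern above:

```python
import random

def mm1_avg_wait(utilization, n_jobs=200_000, service_mean=1.0, seed=42):
    """Simulate a single-server M/M/1 queue and return the average
    time a job spends in the system (queueing + service)."""
    rng = random.Random(seed)
    arrival_mean = service_mean / utilization  # keeps the server busy `utilization` of the time
    clock = 0.0           # arrival time of the current job
    server_free_at = 0.0  # when the server next becomes idle
    total_time = 0.0
    for _ in range(n_jobs):
        clock += rng.expovariate(1.0 / arrival_mean)   # Poisson arrivals
        start = max(clock, server_free_at)             # wait if the server is busy
        server_free_at = start + rng.expovariate(1.0 / service_mean)
        total_time += server_free_at - clock           # time in system for this job
    return total_time / n_jobs

# Theory predicts service_mean / (1 - utilization): 2.5 at 60% load, 20 at 95%
for rho in (0.6, 0.8, 0.95):
    print(f"{rho:.0%} load: {mm1_avg_wait(rho):.1f} time units in system")
```

At 60% load a job spends about two and a half service-times in the system; at 95% it is nearer twenty. Running a team, a server, or yourself flat out doesn’t just remove slack, it multiplies waiting.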

I’m not sure how many working hours a week the dogma “operate at 60% capacity” translates to, but Bertrand Russell thought it might be twenty.

Source: 10 June 2022. Work | Dystopia | Just Two Things

Art gallery mode

I love this post by David Cain so much. He talks about how every weekend during the summer he goes on a bike ride. Using an app to randomise his destination, he always finds something worth discovering. Why? Because he’s in what he calls ‘art gallery mode’.

To select a destination, I use an obscure app called Randonautica, which creates an X-marker somewhere on a map of the city. The app’s “About” section says it chooses this location through “theoretical mind-matter interaction paired with quantum entropy to test the strange entanglement of consciousness with observable reality.” It says the app’s users, when they arrive at their prescribed locations, often find “serendipitous experiences that seemingly align with their thoughts.”

[…]

The first time it sent me to a creekside clearing, where I saw a strange black glob in the water that turned out to be a mass of tadpoles. Another time it sent me to a gravel back lane near where I used to live, at a spot where someone had written “DAD!” on the fence in some kind of white resin. Another day it took me to a book-exchange box containing only children’s books and Stephen King’s Tommyknockers.

Wherever it sends you, there’s always something there that seems charged with a small amount of cosmic significance, even if it’s just a particularly charismatic patch of dappled sunlight, an abandoned shopping list with unusual items on it, or some other superordinary sight akin to the twirling plastic bag in American Beauty.

The trick here is that there’s always something significant, poignant, or poetic everywhere you look, if your mind is in that certain mode – so rare for adults – of just looking at what’s there, without reflexively evaluating or explaining the scene. A mystery co-ordinate in an unfamiliar neighborhood gives you few preconceptions about what you’re going to find there, so the mind naturally flips into this receptive, curious state that’s so natural for children.

I sometimes call this state “art gallery mode,” because of a trick I learned from an art history major. We were at the Metropolitan Museum of Art in New York, browsing famous abstract paintings by Pollock, Kandinsky, Mondrian, and other artists whose swirls, rectangles, and blobs are regarded as masterpieces.

Source: How to Get the Magic Back | Raptitude
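
Whatever one makes of the quantum-entropy claims, the basic mechanic of dropping a marker at a uniformly random point within range of you is a nice little geometry exercise. The non-obvious detail is taking the square root of the uniform draw for the radius; without it, destinations bunch up near the centre. This is my own toy sketch (a flat-earth approximation, fine at city scale), not Randonautica’s actual algorithm, and all the names are invented:

```python
import math
import random

KM_PER_DEG_LAT = 111.32  # roughly constant; a degree of longitude shrinks with latitude

def random_destination(lat, lon, max_km, rng=random):
    """Return a point uniformly distributed over the disc of radius
    max_km centred on (lat, lon). Toy sketch, not Randonautica's method."""
    r_km = max_km * math.sqrt(rng.random())  # sqrt => uniform over *area*
    theta = rng.uniform(0.0, 2.0 * math.pi)
    dlat = (r_km * math.cos(theta)) / KM_PER_DEG_LAT
    dlon = (r_km * math.sin(theta)) / (KM_PER_DEG_LAT * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# e.g. a mystery co-ordinate within 10 km of a chosen starting point
print(random_destination(49.9, -97.14, 10.0))
```

Drop the sqrt and half your destinations land within half the radius of home; with it, only a quarter do, which is what a genuinely uniform spread over the area looks like.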

The new digital divide  

We’re already at the stage where most people in the developed world have a device that can access the internet in their pocket. Many families have multiple devices in their house that can access the internet. We’re getting to the stage where that’s starting to be the case in developing countries.

So the new digital divide? How we use the internet. I think there’s a lot to unpack here, especially as we live in unequal societies dominated by hyper-capitalism. It would be easy to victim blame, but I know from experience that when I’m burned out, all I want to do is stare and scroll at my phone…

People using devices

In his seminar, Moro referred to the socio-economic divide based on how we use the internet as the second digital divide, in contrast to the original digital divide which was based on access to the internet.

Getting online in the 1990s required a personal computer and an account with a service provider, and e-commerce transactions required a credit card and bank account. As our economy was becoming increasingly digital, major new inequalities were now arising because so many around the world could neither afford a PC nor an internet account and had no bank relationship or credit card. The reach and connectivity we were all so excited about in this initial phase of the internet era was in reality not so inclusive. While the internet was truly empowering for those with the means to use it, it led to a growing digital divide both within countries and across the world. The internet was ushering in a global digital revolution, but it was disconcerting to have a global digital revolution that left out the majority of the world’s population.

This picture started to change in the 2000s. Continuing technology advances were now bringing the empowerment benefits of the digital revolution to a majority of the planet’s population. Mobile phones and wireless internet access went from a luxury to a necessity that most everyone could now afford, initially in advanced economies, and later in most of the rest of the world. We were transitioning from the connected economy of PCs, browsers and web servers to an increasingly hyperconnected digital economy of ubiquitous, powerful and inexpensive mobile devices, cloud-based apps, and broadband wireless networks.

While the original internet access gap is now minimal in developed economies, Moro and his collaborators found that a digital usage gap has now emerged, representing the distinct uses of the internet by different socio-economic groups based primarily on their income and educational status.

[…]

The study found quantitative evidence of a significant digital divide in internet usage between two socio-economic groups, each with different income and educational attainment. In principle, all individuals had access to the same internet. But, the study found that each group generally accessed its own distinct version of the internet, and their socio-economic behavior was thus influenced by the fairly different services and information that they were exposed to. By analyzing mobile traffic flows, the study identified the key services that each group accessed:

  • Higher income & education demographics - Information-seeking traffic predominates, e.g., news, mail, search; Instagram, WhatsApp and Twitter are the dominant social media apps; games like Clash of Clans are the most widely used, …
  • Lower income & education demographics - Entertainment traffic predominates, e.g., video-streaming, gaming, adult services; Facebook and Snapchat are the dominant social media apps; games like Candy Crush are the most widely used, …

“The digital usage gap is so profound between low- and high-income or low- or high-education areas that it can be used to clearly distinguish between them or even identify the relative composition of these groups in a given area,” wrote the authors. “High-income areas or those with higher education attainability show a more pronounced utilization of mobile devices to consume news, exchange e-mails, search for information or listen to music. At the same time, they display a reduced use of some social media platforms or video-streaming services.”

Source: The Digital Divide in How We Use the Internet | Irving Wladawsky-Berger

Image source: Robin Worrall

Optimising for feelings, ceding control to the individual

It would be easy to dismiss this as the musings of a small company before they get to scale. However, what I like about it is that the three things they suggest for software developers (look inward, look away from your screen, cede control to the individual) actually constitute very good advice.

So, if not numbers, what might we optimize for when crafting software?

If we’ve learned anything, it’s that all numerical metrics will be gamed, and that by default these numbers lack soul. After all, a life well-lived means something a little different to almost everyone. So it seems a little funny that the software we use almost every waking hour has the same predetermined goals for all of us in mind.

In the end, we decided that we didn’t want to optimize for numbers at all. We wanted to optimize for feelings.

While this may seem idealistic at best or naive at worst, the truth is that we already know how to do this. The most profound craftsmanship in our world across art, design, and media has long revolved around feelings.

[…]

You see — if software is to have soul, it must feel more like the world around it. Which is the biggest clue of all that feeling is what’s missing from today’s software. Because the value of the tools, objects, and artworks that we as humans have surrounded ourselves with for thousands of years goes so far beyond their functionality. In many ways, their primary value might often come from how they make us feel by triggering a memory, helping us carry on a tradition, stimulating our senses, or just creating a moment of peace.

Source: Optimizing For Feelings | The Browser Company

Good ideas become colonised and domesticated

I’ve got this thought about how every good idea becomes colonised and domesticated. While domestication can be a good thing, because it potentially makes an idea more accessible to all, it also robs the idea of its radical, transformatory power.

Colonisation, however, is never a positive thing. It’s about renegotiating existing relationships, often through the lens of power, capital, and hegemony.

How closely the above two paragraphs relate to this article in The New Yorker is questionable. But, to me, the connection is clear: centralised social media has been colonised and domesticated.

Laptop with goo coming out

Once upon a time, the Internet was predicated on user-generated content. The hope was that ordinary people would take advantage of the Web’s low barrier for publishing to post great things, motivated simply by the joy of open communication. We know now that it didn’t quite pan out that way. User-generated GeoCities pages or blogs gave way to monetized content. Google made the Internet more easily searchable, but, in the early two-thousands, it also began selling ads and allowed other Web sites to easily incorporate its advertising modules. That business model is still what most of the Internet relies on today. Revenue comes not necessarily from the value of content itself but from its ability to attract attention, to get eyeballs on ads, which are most often bought and sold through corporations like Google and Facebook. The rise of social networks in the twenty-tens made this model only more dominant. Our digital posting became concentrated on a few all-encompassing platforms, which relied increasingly on algorithmic feeds. The result for users was more exposure but a loss of agency. We generated content for free, and then Facebook mined it for profit.

“Clickbait” has long been the term for misleading, shallow online articles that exist only to sell ads. But on today’s Internet the term could describe content across every field, from the unmarked ads on an influencer’s Instagram page to pseudonymous pop music designed to game the Spotify algorithm. Eichhorn uses the potent term “content capital”—a riff on Pierre Bourdieu’s “cultural capital”—to describe the way in which a fluency in posting online can determine the success, or even the existence, of an artist’s work. Where “cultural capital” describes how particular tastes and reference points confer status, “content capital” connotes an aptitude for creating the kind of ancillary content that the Internet feeds upon. Since so much audience attention is funnelled through social media, the most direct path to success is to cultivate a large digital following. “Cultural producers who, in the past, may have focused on writing books or producing films or making art must now also spend considerable time producing (or paying someone else to produce) content about themselves and their work,” Eichhorn writes. Pop stars log their daily routines on TikTok. Journalists spout banal opinions on Twitter. The best-selling Instapoet Rupi Kaur posts reels and photos of her typewritten poems. All are trapped by the daily pressure to produce ancillary content—memes, selfies, shitposts—to fill an endless void.

Source: How the Internet Turned Us Into Content Machines | The New Yorker

Testing a 4-day work week

I already work what most people would call ‘part-time’, doing no more than 25 paid hours of work per week, on average. I’m glad that employers are experimenting with a shorter workweek (for the same pay) but inevitably one of the metrics will be ‘productivity’ which I think is a ridiculously difficult thing to actually measure…

“After the pandemic, people want a work-life balance,” Joe Ryle, the campaign director for the 4 Day Week Campaign, said in an interview. “They want to be working less.”

More than 3,300 workers in banks, marketing, health care, financial services, retail, hospitality and other industries in Britain are taking part in the pilot, the organizers said. Mr. Ryle said the data would be collected through interviews and staff surveys, and through the measures each company uses to assess its productivity.

“We’ll be analyzing how employees respond to having an extra day off, in terms of stress and burnout, job and life satisfaction, health, sleep, energy use, travel and many other aspects of life,” Juliet Schor, a sociology professor at Boston College and the lead researcher on the project, said.

Source: Britain Tests a 4-Day Workweek | The New York Times

Coffee and its impact on fitness

It’s good to read this, which is a side product of the excellent Just One Thing podcast.

I currently drink a couple of cups of coffee per (working) day — one at around 10:00 and the other at about 14:30. Given I often exercise in the middle of the day, this actually works out pretty well!

Studies have shown that coffee improves almost every aspect of sports performance, whether it's strength, explosive speed, endurance or skill. Dr James Betts, Professor of Metabolic Physiology at the University of Bath, says: “I would put caffeine on top of the list of supplements that boost physical performance – both for the size of effect that you get, and the breadth.”

One way that coffee boosts performance is by blocking the action of adenosine. Adenosine is a chemical messenger in your brain which makes you feel tired. Caffeine blocks the action of adenosine, helping you go on for longer without getting tired. “You could do worse than imagining adenosine to be like a brake that’s going to slow down your neural activity. Caffeine essentially hits the same receptors to prevent sleepiness,” explains Prof Betts.

Another way coffee works to boost exercise performance is by raising your levels of adrenaline, which can reduce pain and delay fatigue. It could also have effects on your fat and muscles.

Source: Can coffee make you fitter? | BBC Radio 4 - Just One Thing

Billable hours and the psychology of work

I have to say that tracking my time is the worst thing about consulting rather than being employed. I don’t feel the urge to work at all hours of the day, but I resent ‘accounting’ financially for my time.

It is tempting to offer some typology of different professions and their attitudes to time. Yet I suspect the types are beginning to blur. In 1992, the economist Peter Sassone coined the phrase “the law of diminishing specialisation”. Thirty years later, it is astonishing how much knowledge work is handled using the same tools and workflow — a workflow that increasingly involves no fixed hours and no fixed location. We are all, like the lawyers, able to do a little bit of extra work before bedtime, even if not all of us can charge £1,000 an hour for it.

And while the “billable hour” can be a psychological trap, it does teach us one valuable lesson: there is a distinction between working and not working. It’s a distinction worth sustaining.

Source: The billable hour is a trap into which more and more of us are falling | Tim Harford

Signalling that you're AFK in a world where you can never really be AFK

*AFK = ‘Away From Keyboard’

I used AIM and MSN Messenger as a teenager, from around 1996 to about 2001. It was great, and I remember messaging with friends and the woman who is now my wife using it.

Part of the whole experience of it was that you were using the service on a shared device, a computer that the rest of the family would use. In that sense, it was more like a text-based landline phone. It wasn’t personal like the smart devices that live in our pockets these days.

There’s a lot of nostalgia about how things used to be, and we’re certainly not going back to shared devices as a primary means of getting online anytime soon. So that means we need other ways of respecting one another’s boundaries. This is something we can actually reclaim ourselves, by responding to messages on our own terms.

Sometimes you had to step away. So you threw up an Away Message: I’m not here. I’m in class/at the game/my dad needs to use the comp. I’ve left you with an emo quote that demonstrates how deep I am. Or, here’s a song lyric that signals I am so over you. Never mind that my Away Message is aimed at you.

I miss Away Messages. This nostalgia is layered in abstraction; I probably miss the newness of the internet of the 1990s, and I also miss just being … away. But this is about Away Messages themselves—the bits of code that constructed Maginot Lines around our availability. An Away Message was a text box full of possibilities, a mini-MySpace profile or a Facebook status update years before either existed. It was also a boundary: An Away Message not only popped up as a response after someone IM’d you, it was wholly visible to that person before they IM’d you.

Nothing like this exists in our modern messaging apps.

[…]

People send too many messages. I send too many messages. The first step in making messaging amends is to admit that you, too, are an inconsiderate messaging maniac.

But I’ll never stop, and neither will you. Quick messaging is a utility. It is, in many cases, the most efficient and meaningful form of communication we have. It’s crucial for relationship building, for organizing, for supporting others through hard times. It can be joyful.

[…]

Would something like the Away Message, a relic from an era when we just didn’t message so darn much, actually put up the guardrails we need? Maybe not. But I’m willing to try anything at this point. If we can’t ever get away from messages, at the very least we can create a digital simulacrum of ourselves that appears to be away. What else is the internet for?

Source: It's Time to Bring Back the AIM Away Message | WIRED

The mesmerising murmurations of Europe’s starlings

Incredible. I highly recommend clicking through to watch the videos!

A murmuration of starlings

How the birds move together in such close proximity, as though one organism, is another mystery. One study found that each starling was responding instantly to the six or seven birds closest to it to maintain group cohesion.

Source: ‘A fragment of eternity’: the mesmerising murmurations of Europe’s starlings | The Guardian
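
That “six or seven nearest birds” finding (a fixed number of neighbours, rather than a fixed distance) is straightforward to play with in code. Below is my own toy sketch of just the heading-alignment rule, with invented names; a real flocking model would also need attraction and separation terms, and actual starling dynamics are far richer:

```python
import math

def align_headings(positions, headings, k=7, rate=0.5):
    """One update step of a toy flocking rule: each bird turns part-way
    toward the average heading of its k nearest neighbours.

    positions: list of (x, y); headings: list of angles in radians.
    Illustrates only the topological-neighbour idea from the study."""
    new = []
    for i, (x, y) in enumerate(positions):
        # k nearest neighbours by squared distance (excluding the bird itself)
        others = sorted(
            (j for j in range(len(positions)) if j != i),
            key=lambda j: (positions[j][0] - x) ** 2 + (positions[j][1] - y) ** 2,
        )[:k]
        # average neighbour heading via unit vectors (safe for angle wrap-around)
        mx = sum(math.cos(headings[j]) for j in others) / len(others)
        my = sum(math.sin(headings[j]) for j in others) / len(others)
        target = math.atan2(my, mx)
        # turn a fraction of the way toward the neighbourhood average
        diff = math.atan2(math.sin(target - headings[i]), math.cos(target - headings[i]))
        new.append(headings[i] + rate * diff)
    return new
```

Iterate this over a scattered flock and the headings rapidly pull into agreement, which is the “as though one organism” effect: global coherence emerging from each bird watching only a handful of others.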

WIRED magazine predicts the 21st century... in 1997

This is from WIRED magazine in 1997 where authors Peter Schwartz and Peter Leyden suggest ten scenarios that could play out in the 21st century. On the one hand, this feels eerily prescient given our current world. On the other hand, perhaps the writing has been on the wall for quite a while.

We’re recording an episode of the Tao of WAO podcast tomorrow with futurist Bryan Alexander, who pretty much predicted the pandemic in his book Academia Next. I wonder what his thoughts are on this?

Ten scenario spoilers include tensions between China and US, new technologies "turn out to be a bust", Russia devolves into a kleptocracy, Europe's integration grinds to a halt, major ecological crises affect food supplies, major rise in crime and terrorism, rise in pollution increases prevalence of cancer, energy prices go through the roof, an uncontrollable plague hits, human progress halts because of a social and cultural backlash

Source: In 1997, Wired Magazine Predicts 10 Things That Could Go Wrong in the 21st Century: “An Uncontrollable Plague,” Climate Crisis, Russia Becomes a Kleptocracy & More | Open Culture

Updating our worldviews

I’m reading a book which deals with the Protestant Reformation at the moment. I think for anyone who knows some history, there have been times which have truly been unprecedented; things have changed so quickly that people haven’t been able to keep up.

We’re living through a slow, but accelerating, car crash. We do need to update our mental models, for sure. But we need to do so collectively, and, most importantly, at levels which are going to have an impact. Let’s not forget that just 100 companies are responsible for 71% of global emissions.

Those of us with active minds are constantly gardening our worldviews. We adjust our perspectives as events around us unfold, as age and experience inform our received wisdom, as we learn new facts — and as cultural change around us pushes us to think differently. Even in extremely stable and slow-changing societies, there are always some people doing this gardening.

But this is not a stable society, and today gardening is not enough.

We grew up in societies built upon certain assumptions about how the world works, and how the planet around us should be seen. We now know those assumptions were wrong in profound ways, and in one human lifetime we have altered the climate and biosphere, squandered vast natural riches and destabilized a myriad of systems we depend on. We have made the circumstances of our lives discontinuous with everything that came before us. The societies we live in are now catastrophically unsuited for the planet we’ve made. Yet we still see the planet around us with worldviews formed inside of those societies.

[…]

Seeing with fresh eyes is something we can learn to do. It offers real advantages. At very least, an updated worldview means being able to stand in the surf and face the ocean, to see the waves rolling in, giving us a better shot at not getting plowed and dragged when the next sleeper wave suddenly surges up and hits us.

[…]

Right now, rebuilding our worldviews involves a lot of labor-intensive personal exploration. Being native to now demands finding insight, not just receiving it. It demands teaching ourselves how to learn new things, when both the course of our study and the lessons to be absorbed are complex and constantly evolving. This is a real challenge when we have such busy lives. A lot of people will decide to worry about it later.

[…]

The greatest danger in any work that asks you to think systemically about the future is getting locked into the worldview that made sense to you when you first began, that you built your successful career on.

We all have limited time and energy. Building up an insightful mental model of how the world works takes a lot of both. The pay-off is in the profit and sense of purpose gained from one’s expertise. It is very common, when you’re highly rewarded for a given set of working insights, to commit more to those insights as your career unfolds, to begin even to defend those insights from challenging new perspectives (ones you fear might devalue your intellectual stock in trade). This “sunk-cost expertise” can easily become a set of shackles.

[…]

All this is to say that the very process of worldview-building is undergoing an unprecedented shift. The planetary crisis is swallowing the world we thought we knew, whole, in one great gulp.

Source: Old thinking will break your brain. | Alex Steffen

Developing your own style (and archive)

I like the way that Warren Ellis works out loud. I’ve read some great books because of this, and learned a lot about developing your own style.

Warren Ellis' LTD logo

I no longer look at traffic stats. I know what it is. That’s not what this site is for. This is a space for achieving personal goals: I’m using it to get thoughts out in front of me where I can see them properly, and if you’re here with me reading over my shoulder, I’m happy with that.

[…]

[T]his place should be a repository of all the things that interest me and teach me, under the general rubric of storytelling, culture and knowledge work. That’s the focus. This is a tool. That means, among other things, that I need to get better at deep linking back into the archive of the site. This is one thing that social media trained us out of. If you’ve been around a while, tumblelogs kind of did that to us too.

[…]

Modifier: “evolving the tools” becomes its own rabbit hole. Just learn the habit of putting stuff where you can fucking find it later, Warren.

Source: LTD Development | WARREN ELLIS LTD

Should governments track supermarket purchases?

We booked a holiday to France this week and used Tesco vouchers to pay for the Eurotunnel crossing. These Tesco vouchers are a kind of payment-in-kind for the data they gather (and presumably sell) about our grocery purchases.

I use both Google Pay and Garmin Pay so that I don’t have to take a wallet with me everywhere. It’s convenient, but these two tech companies — as well as my bank — know a lot about my purchasing habits.

So, from that point of view, it seems odd to wring our hands about the State knowing more about grocery purchases. But the point, I guess, is that in this case there’s no way to escape it, no opt-out.

Statistics Norway (SSB) is the state-owned entity responsible for collecting, producing and communicating statistics related to the economy, population and society at national, regional and local levels.

Because everything about an individual living in Norway is linked to their fødselnummer (birth number), SSB already knows where you live, what you earn and what’s on your criminal record.

However, according to a report by NRK, they now want to know where you shop, and what you buy.

SSB has ordered Norway’s major supermarket chains NorgesGruppen, Coop, Bunnpris and Rema 1000 to share all their receipt data with the agency. Nets, the payment processor that is responsible for 80% of transactions, will also need to provide data.

[…]

SSB claims they want a less time-consuming way of collecting and analysing household consumption statistics in order to inform tax policy, social assistance and child allowance.

[…]

SSB is adamant that they are only concerned with statistics at a group level: “When the purchases are linked to a household, it will be possible in the consumption statistics to analyze socio-economic and regional differences in consumption, and link it to variables such as income, education and place of residence.”

Source: Norway to Track All Supermarket Purchases | Life in Norway
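Purely to illustrate the distinction SSB is drawing between individual and group-level data, here's a minimal sketch. SSB hasn't published its methodology, and every receipt, region, and figure below is invented; the point is only that receipt lines linked to a household can be collapsed into aggregates before anything is published.

```python
# Illustrative sketch only — all data here is invented. Receipt lines are
# linked to a household, totalled internally, and only the group-level
# statistic (mean household spend per region) would ever be published.
from collections import defaultdict
from statistics import mean

# Hypothetical receipt lines: (household_id, region, amount in NOK)
receipts = [
    (1, "Oslo", 120.0), (1, "Oslo", 80.0),
    (2, "Oslo", 200.0), (2, "Oslo", 100.0),
    (3, "Troms", 90.0), (3, "Troms", 110.0),
]

# Step 1: total spend per household (individual-level, kept internal)
per_household = defaultdict(float)
regions = {}
for household, region, amount in receipts:
    per_household[household] += amount
    regions[household] = region

# Step 2: the publishable statistic — mean household spend per region
by_region = defaultdict(list)
for household, total in per_household.items():
    by_region[regions[household]].append(total)

group_stats = {region: mean(totals) for region, totals in by_region.items()}
print(group_stats)  # {'Oslo': 250.0, 'Troms': 200.0}
```

Of course, the privacy concern is precisely that step 1 exists at all: the individual-level table has to be held somewhere before it can be aggregated.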

Distro-hopping like a cynic

My own Linux journey has gone from Red Hat Linux, to Ubuntu, to Pop!_OS. However, today I’ve been messing about with Fedora Silverblue. I’m actually typing this on ChromeOS, which of course is also Linux.

What I like about The Register is their snarky, sarcastic style, which they put to good use in this article.

It is a truth universally acknowledged that all operating systems suck. Some just suck less than others.

It is also a comment under pretty much every Reg article on Linux that there are too many to choose from and that it’s impossible to know which one to try. So we thought we’d simplify things for you by listing how and in which ways the different options suck.

Source: The cynic’s guide to desktop Linux | The Register

#AbolishTheMonarchy

It’s Jubilee weekend in the UK, not that I’m celebrating. Someone re-shared this classic article in The Irish Times from last year, which captures the sheer ridiculousness of venerating a talentless inbred family.

Harry Windsor and Meghan Markle

The contemporary royals have no real power. They serve entirely to enshrine classism in the British nonconstitution. They live in high luxury and low autonomy, cosplaying as their ancestors, and are the subject of constant psychosocial projection from people mourning the loss of empire. They’re basically a Rorschach test that the tabloids hold up in order to gauge what level of hysterical batshittery their readers are capable of at any moment in time.

Source: Harry and Meghan: The union of two great houses, the Windsors and the Celebrities, is complete | The Irish Times

Epic UK walking trails

After walking Hadrian’s Wall (84 miles, 72 hours) a couple of months ago, I’m now seriously considering walking The Pennine Way (268 miles) next year. I reckon it might take a couple of weeks, as I absolutely beasted myself to do Hadrian’s Wall so quickly.

High Force waterfall

If you can’t spare a week to walk Britain’s mountain trails, coastal tracks and riverside paths, head for these expert-picked highlights
Source: The best of the UK’s most epic walking trails in 2-3 days | The Guardian

Space of possibilities

In Andrew Curry’s latest missive, his ‘two things’ are Climate and Business. The diagram below is from the latter section, but I think it’s also very relevant to the former.

At his Roblog blog, Rob Miller has a short and engaging post on why businesses fail over time. I’m not sure it’s right, but it’s certainly interesting, and he tells the story through three diagrams.

He argues—following the work of Jens Rasmussen—that successful businesses operate in a safe space that sits between economic failure, on one side, lack of safety, on another, and overload, on a third.

Source: 4 May 2022. Climate | Business - Just Two Things

Popular culture has become an endless parade of sequels

Once you start recognising colour schemes and sound effects, every new film ends up looking and sounding the same.

Yes, I’m getting old, but as Adam Mastroianni from Experimental History explains, there are shifts happening in everything from books to video games.

The problem isn’t that the mean has decreased. It’s that the variance has shrunk. Movies, TV, music, books, and video games should expand our consciousness, jumpstart our imaginations, and introduce us to new worlds and stories and feelings. They should alienate us sometimes, or make us mad, or make us think. But they can’t do any of that if they only feed us sequels and spinoffs. It’s like eating macaroni and cheese every single night forever: it may be comfortable, but eventually you’re going to get scurvy.

[…]

Fortunately, there’s a cure for our cultural anemia. While the top of the charts has been oligopolized, the bottom remains a vibrant anarchy. There are weird books and funky movies and bangers from across the sea. Two of the most interesting video games of the past decade put you in the role of an immigration officer and an insurance claims adjuster. Every strange thing, wonderful and terrible, is available to you, but they’ll die out if you don’t nourish them with your attention. Finding them takes some foraging and digging, and then you’ll have to stomach some very odd, unfamiliar flavors. That’s good. Learning to like unfamiliar things is one of the noblest human pursuits; it builds our empathy for unfamiliar people. And it kindles that delicate, precious fire inside us––without it, we might as well be algorithms. Humankind does not live on bread alone, nor can our spirits long survive on a diet of reruns.

Source: Pop Culture Has Become an Oligopoly | Experimental History
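Mastroianni's distinction between a falling mean and a shrinking variance can be made concrete with a toy calculation. The scores below are invented for illustration, not taken from his data:

```python
# Toy illustration of the "variance has shrunk" claim — scores are invented.
# Two eras of releases can have the same average quality while one offers
# far less variety.
from statistics import mean, pvariance

era_varied  = [2, 4, 6, 8, 10]   # hypothetical quality scores, wide spread
era_sequels = [5, 6, 6, 6, 7]    # same mean, much narrower spread

print(mean(era_varied), mean(era_sequels))            # identical means: 6 and 6
print(pvariance(era_varied), pvariance(era_sequels))  # variance: 8 vs 0.4
```

Nothing has gotten worse on average in this toy example; the outliers, good and bad, have simply disappeared. That's his complaint in a nutshell.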

The Climate Game

The Financial Times has a free-to-play game where the aim is to try and keep global warming to only 1.5°C by the year 2100. There are different things to choose from and decisions to make.

I only managed to keep it to 1.88°C and still had to make some pretty drastic decisions. We’re utterly screwed. We need to act on the climate crisis, but also start adapting too.

(also worth looking at the article on how they made this)

See if you can save the planet from the worst effects of climate change
Source: The Climate Game — Can you reach net zero? | Financial Times

14 Common Features of Fascism

This is from six years ago but it’s worth revisiting as I don’t think it’s too much of a push to see these elements at play in the USA right now. Which, if you think about the role that country played up until recently, is staggering.

Fascism and authoritarianism change over time, however, and so it’s worth also listening to a recent episode of the BBC Radio 4 Thinking Allowed programme on ‘Strongmen’. For example, instead of banning elections, fascists/authoritarians allow the democratic veneer to remain; they just ensure that the result goes the way they want.

While Eco is firm in claiming “There was only one Nazism,” he says, “the fascist game can be played in many forms, and the name of the game does not change.” Eco reduces the qualities of what he calls “Ur-Fascism, or Eternal Fascism” down to 14 “typical” features. “These features,” writes the novelist and semiotician, “cannot be organized into a system; many of them contradict each other, and are also typical of other kinds of despotism or fanaticism. But it is enough that one of them be present to allow fascism to coagulate around it.”
  1. The cult of tradition. “One has only to look at the syllabus of every fascist movement to find the major traditionalist thinkers. The Nazi gnosis was nourished by traditionalist, syncretistic, occult elements.”
  2. The rejection of modernism. “The Enlightenment, the Age of Reason, is seen as the beginning of modern depravity. In this sense Ur-Fascism can be defined as irrationalism.”
  3. The cult of action for action’s sake. “Action being beautiful in itself, it must be taken before, or without, any previous reflection. Thinking is a form of emasculation.”
  4. Disagreement is treason. “The critical spirit makes distinctions, and to distinguish is a sign of modernism. In modern culture the scientific community praises disagreement as a way to improve knowledge.”
  5. Fear of difference. “The first appeal of a fascist or prematurely fascist movement is an appeal against the intruders. Thus Ur-Fascism is racist by definition.”
  6. Appeal to social frustration. “One of the most typical features of the historical fascism was the appeal to a frustrated middle class, a class suffering from an economic crisis or feelings of political humiliation, and frightened by the pressure of lower social groups.”
  7. The obsession with a plot. “Thus at the root of the Ur-Fascist psychology there is the obsession with a plot, possibly an international one. The followers must feel besieged.”
  8. The enemy is both strong and weak. “By a continuous shifting of rhetorical focus, the enemies are at the same time too strong and too weak.”
  9. Pacifism is trafficking with the enemy. “For Ur-Fascism there is no struggle for life but, rather, life is lived for struggle.”
  10. Contempt for the weak. “Elitism is a typical aspect of any reactionary ideology.”
  11. Everybody is educated to become a hero. “In Ur-Fascist ideology, heroism is the norm. This cult of heroism is strictly linked with the cult of death.”
  12. Machismo and weaponry. “Machismo implies both disdain for women and intolerance and condemnation of nonstandard sexual habits, from chastity to homosexuality.”
  13. Selective populism. “There is in our future a TV or Internet populism, in which the emotional response of a selected group of citizens can be presented and accepted as the Voice of the People.”
  14. Ur-Fascism speaks Newspeak. “All the Nazi or Fascist schoolbooks made use of an impoverished vocabulary, and an elementary syntax, in order to limit the instruments for complex and critical reasoning.”
Source: Umberto Eco Makes a List of the 14 Common Features of Fascism | Open Culture

Are we really calling it #Elongate?

There’s been a noticeable influx of people to the Fediverse over the last few days due to Elon Musk’s acquisition of Twitter.

What I find really interesting are three things:

  1. Those arriving inevitably compare five-year-old, federated, open-source software developed mainly by two people with a fifteen-year-old publicly-traded company. The fact that they're even comparable is frankly amazing, if you think about the money poured into Twitter over the years.
  2. Some people already on the Fediverse seem to think they have to act differently and/or take time to explain all of the things to people arriving from Twitter. I'm not sure that's necessary. People learn by watching, imitating, and practising.
  3. There's plenty of people (including me, I guess, to some extent) who are keen to point out that they've been around on the Fediverse for quite a while, thank you very much.
There are, of course, many more compatible federated social networks than just Mastodon. Check out fediverse.party!

“Funnily enough one of the reasons I started looking into the decentralized social media space in 2016, which ultimately led me to go on to create Mastodon, were rumours that Twitter, the platform I’d been a daily user of for years at that point, might get sold to another controversial billionaire,” he wrote. “Among, of course, other reasons such as all the terrible product decisions Twitter had been making at that time. And now, it has finally come to pass, and for the same reasons masses of people are coming to Mastodon.”
Source: After Musk's Twitter takeover, an open-source alternative is 'exploding' | Engadget

Dedicated portable digital media players and central listening devices

I listen to music. A lot. In fact, I’m listening while I write this (Busker Flow by Kofi Stone). This absolutely rinses my phone battery unless it’s plugged in, or if I’m playing via one of the smart speakers in every room of our house.

I’ve considered buying a dedicated digital media player, specifically one of the Sony Walkman series. But even the reasonably-priced ones are almost the cost of a smartphone and, well, I carry my phone everywhere.

It’s interesting, therefore, to see Marc Weidenbaum responding to a shoutout in Warren Ellis' newsletter. It seems they both have dedicated ‘music’ screens on their smartphones. Personally, I use an Android launcher that makes that impractical. Also, I tend to switch between only four apps: Spotify (I’ve been a paid subscriber for 13 years now), Auxio (for MP3s), BBC Sounds (for radio/podcasts), and AntennaPod (for other podcasts). I don’t use ‘widgets’ other than the player in the notifications bar, if that counts.

Source: Central Listening Device | Disquiet

Highlights from 'The Internet Is Not What You Think It Is'

On my flight back from Croatia at the weekend, I managed to read the entirety of The Internet Is Not What You Think It Is: A History, A Philosophy, A Warning by Justin E.H. Smith. To be honest, the book itself is not what you think it is, as Sam Kriss notes in his (equally good) review.

I have a background in Philosophy which might have helped with this book, as it delves into the history of ideas quite a bit. Although he outlines four 'charges' against the internet, the main thesis that I understand Smith as postulating is that the internet, and in particular the culture around it, shouldn't be seen as a revolutionary break with what has gone before.

To my mind, Smith makes some good arguments, although he gets too bogged-down with Leibniz for my liking. But in general, I like the book and gave it 4.5 stars out of five on Literal.club. What follows are some of my favourite sections of the book, which I'd encourage you to read.

Book cover for 'The Internet Is Not What You Think It Is'

As the quotations I'm using are fairly lengthy, I'll introduce each one. In this first one, Smith talks about his phenomenological approach which focuses on actual usage of terms.

It seems reasonable terminologically to follow actual usage, and it seems conceptually justified to focus on the small corner of the internet that is phenomenologically most salient to human life, just as when we speak of “life on earth” we often have humans and animals foremost in mind, even though all the plant life on earth weighs over two hundred times more than all the animals combined, in terms of total biomass. Animals are a tiny sliver of life on earth, yet they are preeminently what we mean when we talk about life on earth; social media are a tiny sliver of the internet, yet they are what we mean when we speak of the internet, as they are where the life is on the internet. (Thus, “internet” serves as a sort of reverse synecdoche, the larger containing term standing for the smaller contained term. The reason for adopting this terminology is that it seems to agree with actual usage among current English speakers; on Twitter, for example, you will often see users declaring exasperatedly that their antagonists need to “get off the internet” and “touch grass.” Here, they don’t really mean the whole internet; they mean Twitter. (p.17)

The four charges that Smith makes are that the internet is addictive, that it shapes human life algorithmically, that there is no democratic oversight of social media, and that it works as a universal surveillance device.

The principal charges against the internet, deserving of our attention here, instead have to do with the ways in which it has limited our potential and our capacity for thriving, the ways in which it has distorted our nature and fettered us. Let us enumerate them. First, the internet is addictive and is thus incompatible with our freedom, conceived as the power to cultivate meaningful lives and future-oriented projects in which our long-term, higher-order desires guide our actions, rather than our short-term, first-order desires. Second, the internet runs on algorithms, and shapes human lives algorithmically, and human lives under the pressure of algorithms are not enhanced, but rather warped and impoverished. To the extent that we are made to conform to them, we experience a curtailment of our freedom. Third, there is little or no democratic oversight regarding how social media work, even though their function in society has developed into something far more like a public utility, such as running water, than like a typical private service, such as dry cleaning. Private companies have thus moved in to take care of basic functions necessary for civil society, but without assuming any real responsibility to society. This, too, is a diminution of the political freedom of citizens of democracy, understood as the power to contribute to decisions concerning our social life and collective well-being. What Michael Walzer said of socialism might be said of democracy too: that “what touches all should be decided by all.” And on this reckoning, the internet is aggressively undemocratic. Fourth, the internet is now a universal surveillance device, and for this reason as well it is incompatible with the preservation of our political freedom. (p.18-19)

Smith goes on to explain the impact of each of these and starts to talk about how the problems interact with one another.

This then is the first thing that is truly new about the present era: a new sort of exploitation, in which human beings are not only exploited in the use of their labor for extraction of natural resources; rather, their lives are themselves the resource, and they are exploited in its extraction.

[...]

This then is the second new problem of the internet era: the way in which the emerging extractive economy threatens our ability to use our mental faculty of attention in a way that is conducive to human thriving. Both the first and second problems are aggravated significantly with the rise of the mobile internet, and what Citton astutely labels “affective condensation.” Most of our passions and frustrations, personal bonds and enmities, responsibilities and addictions, are now concentrated into our digital screens, along with our mundane work and daily errands, our bill-paying and our income tax spreadsheets. It is not just that we have a device that is capable of doing several things, but that this device has largely swallowed up many of the things we used to do and transformed these things into various instances of that device’s universal imposition of itself: utility has crossed over into compulsoriness.

[...]

This then is the third feature of our current reality that constitutes a genuine break with the past: the condensation of so much of our lives into a single device, the passage of nearly all that we do through a single technological portal. This consolidation, of course, helps and intensifies the first two novelties of our era that we identified, namely, the extraction of attention from human subjects as a sort of natural resource, and the critical challenge this new extractive economy poses to our mental faculty of attention.

[...]

If we all find it difficult to distinguish between advertisement and not-advertisement, this is in part because, today, all is advertisement. Or, to put this somewhat more cautiously, there is no part of our most important technology products and services that is kept cordoned off as a safe space from the commercial interests of the companies that own them.

[...]

This then is the fourth genuine novelty of the present era: in the rise of an economy focused on extracting information from human beings, these human beings are increasingly perceived and understood as sets of data points; and eventually it is inevitable that this perception cycles back and becomes the self-perception of human subjects, so that those individuals will thrive most, or believe themselves to thrive most, in this new system who are able convincingly to present themselves not as subjects at all, but as attention-grabbing sets of data points. (p.24-28)

Smith uses the example of a partnership between Ancestry and Spotify to be able to 'play the music that fits with your heritage'. It was a cynical marketing ploy, but he uses it to illustrate a wider point about the role of algorithms in society. His point is a nuanced and important one about how we serve algorithms, rather than having them serve us.

We are not, yet, accustomed to seeing these different trends—the corporate opportunism of Ancestry and Spotify; the sinister right-wing populism of the aforementioned leaders; and the identitarian campaigns for cultural purity driven mostly by young self-styled “progressives” on social media—as inflections of the same broad historical phenomenon. But perhaps their commonality may become clearer when we consider all of them as symptoms of an underlying and much vaster historical shift: the shift to ubiquitous algorithmic management of society, which lends advantage to the expression of opinions unambigous enough (i.e., dogmatic or extremist enough) for AI to detect their meaning and to process them accordingly, and which also removes from the individual subject any deep existential imperative or moral duty to cultivate self-understanding, instead allowing the sort of vectors of identity that even AI can pick up and process to substitute for any real idea of who an individual is or might yet hope to be. (p.56)

In 2011 there was a lot written about how the internet, and social media in particular, was bringing about a new positive world order. There was talk of a 'deliberative democracy', but actually (Smith points out) that never materialised.

What we have in fact obtained in place of this is a farcical imitation of deliberation, in which algorithms are designed by the companies that provide the platforms for discussion in order to maximize engagement, a purpose that is self-evidently at odds with the goal of conflict resolution or consensus-building. Social media are in this respect engines of perpetual disagreement, which sharpen opposing views into stark dichotomies and preclude the possibility of either exploring partial common ground or finding agreement in a dialectical fashion in some higher-order synthesis of what at the first order appear as contradictory positions. (p.59-60)

Chapter 2 is the pivotal chapter, as Smith outlines what I consider to be his main thesis that historical human interactions pre-empted internet culture.

The internet is still not what you think it is.

For one thing, it is not nearly as newfangled as the previous chapter made it appear. It does not represent a radical rupture with everything that came before, either in human history or in the vastly longer history of nature that precedes the first appearance of our species. It is, rather, only the most recent permutation of a complex of behaviors that is as deeply rooted in who we are as a species as anything else we do: our storytelling, our fashions, our friendships; our evolution as beings that inhabit a universe dense with symbols. (p.64)

He continues some pages later on the same theme.

Anthropogenic alterations of the natural environment are often too subtle to detect, even when they profoundly transform it, as for example in efforts to distinguish controlled-burning events from naturally occurring fires in human prehistory, or perhaps in the particular quality of Amazonian biodiversity today. If we were not so attached to the idea that human creations are of an ontologically different character than everything else in nature—that, in other words, human creations are not really in nature at all, but extracted out of nature and then set apart from it—we might be in a better position to see human artifice, including both the mass-scale architecture of our cities and the fine and intricate assembly of our technologies, as a properly natural outgrowth of our species-specific activity. It is not that there are cities and smartphones wherever there are human beings, but cities and smartphones themselves are only the concretions of a certain kind of natural activity in which human beings have been engaging all along. (p.89)

As a philosopher, Smith draws on a rich history of ideas and can weave together quite the rich picture of how the internet fits in with that history.

I am not, here, going quite so far as to say that the internet proves the truth of the theory of the world soul as it descends from Greek antiquity to the present day. I am too responsible to say that. Rather, I will carefully venture, as I began to do in the previous chapters, to note that it will help us to understand the nature and significance of the internet to consider it as only the most recent chapter in a much longer, and much deeper, history. (p.130)

From here, there's a fascinating discussion of metaphor and what counts as 'simulation'. There's also a great section on AI. So I'd encourage you to read it!

The economics of blockchain-based gaming don't add up

Blake Robbins, who used to work on game design at Roblox, has written an in-depth post on why blockchain-based gaming will never take off.

TL;DR: not only is it likely to be a Ponzi scheme, it's just a really bad idea for basic economic reasons.

The policy trilemma

Narratives can be moulded, but unfortunately crypto gaming evangelists will not be able to change basic economics. The problem of the Mundell-Fleming trilemma, and how crypto games fall on the wrong side of it from a pure game design perspective (which ultimately prevents large developers from creating AAA games with open economies, as well as ruining the user experience), is totally ignored by VCs who are funnelling absurd amounts of money into these projects. It makes me question if they actually believe in the narrative they’re pushing, or if they’re simply investing in token pre-sales and planning on dumping on unwitting retail bagholders.

For the record, I’m not a crypto hater or anything... [h]owever, I just don’t see the application of decentralised blockchains in gaming, there isn’t a need. Putting games on the blockchain will just result in really slow servers as everything would constantly have to be verified by a decentralised database. No gamer has ever said: “I don’t trust Rockstar to store my data correctly which is why I won’t buy GTA V”. Building games for the sole purpose of “play to earn” or “play to own” means that players are no longer playing games for enjoyment, but rather the hope that they can monetise their holdings. Inevitably, this means that the quality of game experience will drop, as developers focus solely on how to turn every single aspect of a game into an NFT which can be traded. Collectible trading should be complementary, like in Roblox or Counter-Strike; it should not be the whole purpose of a game. You might as well scrap the game altogether, and just focus on making NFT collections like Bored Apes or Cryptopunks. Recreating games to have a similar culture will not work out.

Source: Why crypto gaming is not the future | blakeir

Literally shitposting

I saw this mentioned in passing and thought it was unusual enough to share here. There's a metaphor in there somewhere...

Emperor Heinrich VI

In July 1184 Henry VI, King of Germany (later Holy Roman Emperor), held court at a Hoftag in Erfurt. On the morning of 26 July, the combined weight of the assembled nobles caused the wooden second story floor of the assembly building to collapse and most of them fell through into the latrine cesspit below the ground floor, where about 60 of them drowned in liquid excrement. This event is called Erfurter Latrinensturz (lit. 'Erfurt latrine fall') in several German sources.

Source: Erfurt latrine disaster | Wikipedia

Assume that your devices are compromised

I was in Catalonia in 2017 during the independence referendum. The way that people were treated when trying to exercise democratic power I still believe to be shameful.

These days, I run the most secure version of an open operating system on my mobile device that I can. And yet I still need to assume it's been compromised.

In Catalonia, more than sixty phones—owned by Catalan politicians, lawyers, and activists in Spain and across Europe—have been targeted using Pegasus. This is the largest forensically documented cluster of such attacks and infections on record. Among the victims are three members of the European Parliament, including Solé. Catalan politicians believe that the likely perpetrators of the hacking campaign are Spanish officials, and the Citizen Lab’s analysis suggests that the Spanish government has used Pegasus. A former NSO employee confirmed that the company has an account in Spain. (Government agencies did not respond to requests for comment.) The results of the Citizen Lab’s investigation are being disclosed for the first time in this article. I spoke with more than forty of the targeted individuals, and the conversations revealed an atmosphere of paranoia and mistrust. Solé said, “That kind of surveillance in democratic countries and democratic states—I mean, it’s unbelievable.”

[...]

[T]here is evidence that Pegasus is being used in at least forty-five countries, and it and similar tools have been purchased by law-enforcement agencies in the United States and across Europe. Cristin Flynn Goodwin, a Microsoft executive who has led the company’s efforts to fight spyware, told me, “The big, dirty secret is that governments are buying this stuff—not just authoritarian governments but all types of governments.”

[...]

The Citizen Lab’s researchers concluded that, on July 7, 2020, Pegasus was used to infect a device connected to the network at 10 Downing Street, the office of Boris Johnson, the Prime Minister of the United Kingdom. A government official confirmed to me that the network was compromised, without specifying the spyware used. “When we found the No. 10 case, my jaw dropped,” John Scott-Railton, a senior researcher at the Citizen Lab, recalled. “We suspect this included the exfiltration of data,” Bill Marczak, another senior researcher there, added. The official told me that the National Cyber Security Centre, a branch of British intelligence, tested several phones at Downing Street, including Johnson’s. It was difficult to conduct a thorough search of phones—“It’s a bloody hard job,” the official said—and the agency was unable to locate the infected device. The nature of any data that may have been taken was never determined.

Source: How Democracies Spy On Their Citizens | The New Yorker

What technology means in late capitalism

Anyone familiar with Guy Debord's Society of the Spectacle will appreciate this article by Jonathan Crary, author of the short but impressive 24/7 Capitalism.

Crary's argument is that our current status quo depends on a capital-fuelled, extractive military-industrial complex that cannot be sustained. What comes next can't simply be (and isn't likely to look like) a 'Green New Deal' version of it.

Any possible path to a survivable planet will be far more wrenching than most recognize or will openly admit. A crucial layer of the struggle for an equitable society in the years ahead is the creation of social and personal arrangements that abandon the dominance of the market and money over our lives together. This means rejecting our digital isolation, reclaiming time as lived time, rediscovering collective needs, and resisting mounting levels of barbarism, including the cruelty and hatred that emanate from online. Equally important is the task of humbly reconnecting with what remains of a world filled with other species and forms of life. There are innumerable ways in which this may occur and, although unheralded, groups and communities in all parts of the planet are moving ahead with some of these restorative endeavors.

However, many of those who understand the urgency of transitioning to some form of eco-socialism or no-growth post-capitalism carelessly presume that the internet and its current applications and services will somehow persist and function as usual in the future, alongside efforts for a habitable planet and for more egalitarian social arrangements. There is an anachronistic misconception that the internet could simply “change hands,” as if it were a mid-20th-century telecommunications utility, like Western Union or radio and TV stations, which would be put to different uses in a transformed political and economic situation.

But the notion that the internet could function independently of the catastrophic operations of global capitalism is one of the stupefying delusions of this moment. They are structurally interwoven, and the dissolution of capitalism, when it happens, will be the end of a market-driven world shaped by the networked technologies of the present.

Of course, there will be means of communication in a post-capitalist world, as there always have been in every society, but they will bear little resemblance to the financialized and militarized networks in which we are entangled today. The many digital devices and services we use now are made possible through unending exacerbation of economic inequality and the accelerated disfiguring of the earth’s biosphere by resource extraction and needless energy consumption.

Source: The Digital Age is Destroying Us | Literary Hub

Using DICE instead of RA(S)CI

I like what the RACI responsibility assignment matrix tries to do in clarifying roles and responsibilities. In practice, I tend to favour RASCI which adds a 'support' role.
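For illustration, the structure a RA(S)CI matrix encodes can be sketched as a small data structure plus its core consistency rule (every task has exactly one person Accountable). The task and people names below are hypothetical.

```python
# A minimal sketch of a RASCI matrix: each task maps people to one of the
# five role codes, and a well-formed matrix has exactly one Accountable
# person per task.
ROLES = {"R", "A", "S", "C", "I"}  # Responsible, Accountable, Support, Consulted, Informed

matrix = {
    "Draft proposal": {"Ana": "R", "Ben": "A", "Cat": "S", "Dev": "I"},
    "Approve budget": {"Ben": "A", "Ana": "C", "Dev": "I"},
}

def validate(matrix):
    """Return a list of problems; an empty list means the matrix is well-formed."""
    problems = []
    for task, assignments in matrix.items():
        if any(role not in ROLES for role in assignments.values()):
            problems.append(f"{task}: unknown role code")
        accountable = [p for p, role in assignments.items() if role == "A"]
        if len(accountable) != 1:
            problems.append(f"{task}: needs exactly one Accountable, found {len(accountable)}")
    return problems

print(validate(matrix))  # → []
```

The "exactly one Accountable" check is the part of RACI/RASCI that most often breaks down in practice, which is why it is worth making explicit.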

But I also agree with this article by Clay Parker Jones which suggests an alternative.

RACI is vague, hard to use, and reinforces the "what the hell is happening here" status quo. DICE is specific, easy to use, and shines a bright light on dysfunction.

Source: Use DICE instead of RACI | cpj.fyi

The value of a liberal education

I have degrees in Philosophy, History, and Education. As such, I have received what most would call a ‘liberal education’.

These days, people don’t set as much store by a liberal education as they used to, which is a shame. In fact, many people don’t even know what it means. SMBC explains.

SMBC is a daily comic strip about life, philosophy, science, mathematics, and dirty jokes.
Source: Liberal Education | Saturday Morning Breakfast Cereal

'Live Forever' mode

My first response to this article was ‘why?’ My second was realising that this is in no way ‘living forever’. Utterly pointless.

The death of [Somnium Space CEO] Sychov’s father served as the inspiration for an idea that he would come to call “Live Forever” mode, a forthcoming feature in Somnium Space that allows people to have their movements and conversations stored as data, then duplicated as an avatar that moves, talks, and sounds just like you—and can continue to do so long after you have died. In Sychov’s dream, people will be able to talk to their dead loved one whenever they wish.

“Literally, if I die—and I have this data collected—people can come or my kids, they can come in, and they can have a conversation with my avatar, with my movements, with my voice,” he told me. “You will meet the person. And you would maybe for the first 10 minutes while talking to that person, you would not know that it’s actually AI. That’s the goal.”

[…]

But even with all the ethical preparation and experience the company can muster, there will be inevitable and justifiable ethical questions about allowing a version of a self to continue on in perpetuity. What if, for example, the children of a deceased Somnium Space user found it painful to know he was continuing on in some form in their metaverse?

Source: Metaverse Company to Offer Immortality Through ‘Live Forever’ Mode | Vice

The rise of first-party online tracking

In a startling example of the Matthew effect of accumulated advantage, the incumbent advertising giants are actually being strengthened by legislation aimed at curbing their influence. Because, of course.

For years, digital businesses relied on what is known as “third party” tracking. Companies such as Facebook and Google deployed technology to trail people everywhere they went online. If someone scrolled through Instagram and then browsed an online shoe store, marketers could use that information to target footwear ads to that person and reap a sale.

[...]

Now tracking has shifted to what is known as “first party” tracking. With this method, people are not being trailed from app to app or site to site. But companies are still gathering information on what people are doing on their specific site or app, with users’ consent. This kind of tracking, which companies have practiced for years, is growing.

[...]

The rise of this tracking has implications for digital advertising, which has depended on user data to know where to aim promotions. It tilts the playing field toward large digital ecosystems such as Google, Snap, TikTok, Amazon and Pinterest, which have millions of their own users and have amassed information on them. Smaller brands have to turn to those platforms if they want to advertise to find new customers.

Source: How You’re Still Being Tracked on the Internet | The New York Times

It's time to accept that centralised social media won't change

A great blog post by Chris Trottier about actually doing something about the problems with centralised social media: refusing to be a part of it any more.

As an aside, once you see the problem with capitalism mediating every human relationship and interest, you can’t un-see it. For example, I’m extremely hostile to advertising. I really can’t stand it these days.

Centralized social media won't change. No regulatory bodies are coming to the rescue. If you hang around Twitter or Facebook long enough, no benevolent CEO will sprinkle magic pixie dust to make it better.

Acceptance is no small thing. If you’ve spent years on a social network, investing in relationships, it’s hard to accept that all that effort was a waste. I’m not talking about the people you build friendships with, but the companies and services that connect you. Twitter and Facebook are the nuclear ooze of the Internet, and nothing’s going to make them better.

It’s time to let go. Toxic social media doesn’t care about you, it just wants to exploit you. To them, you’re inventory, a blip in a database.

[…]

Getting rid of toxic social media is about building a future without it. There’s thousands of developers working on an open web, all who are dedicated to building a better Internet. Still, if we want those walled gardens to be dismantled, we must let developers know it’s worth while to code an alternative.

Thus, it's time to accept centralized social media for what it is: it is toxic and won't change. Once you accept this, vote with your feet. Then vote with your wallet.
Source: What should we do about toxic social media? | Peerverse

The triple-peak work day is a worrying trend

When I first stepped into the world of consulting, I spent around 18 months working with a large organisation. The person I reported to in the organisation did all of his real work in the evenings, because his 9-5 day was completely full of meetings.

Talking in meetings isn’t work. I’ve never thought so, and never will.

Last week, Microsoft published a study that offers an eerie reflection of my working life. Traditionally, the researchers said, white-collar workers—or “knowledge workers,” in the modern parlance—have had two productivity peaks in their workday: just before lunch and just after lunch. But since the pandemic, a third and smaller bump of work has emerged in the late evening. Microsoft’s researchers refer to this phenomenon as the “triple peak day.”

[…]

Several underlying phenomena are pushing up this third mountain of work. One is the flexibility of at-home work. For example, parents of young kids might interrupt their workday or cut it off early for school pickup, dinnertime, bedtime, and other child care. This leaves a rump of work that they finish up later. Other workers are night owls who get their second wind—or even their primary gust of creativity—just before bed.

[...]

Something else is pushing work into our evenings: White-collar work has become a bonanza of meetings. In the first months of the pandemic, Microsoft saw online meetings soar as offices shut down. By the end of 2020, the number of meetings had doubled. In 2021, it just kept growing. This year it’s hit an all-time high.

Source: The Rise of the 9 p.m. Work Hour | The Atlantic

My highlights from 'Drive Your Plow Over the Bones of the Dead'

This morning, I finished reading Drive Your Plow Over the Bones of the Dead, the translated name of Olga Tokarczuk’s 2009 novel, published a decade later in English.

I thought I’d share five of the sections I highlighted, because it’s one of those books that, despite being a work of fiction, also has sections which describe the human condition well.

(I’ll also note that the book has made me more militantly vegetarian, which I didn’t see coming!)

It is at Dusk that the most interesting things occur, for that is when simple differences fade away. I could live in everlasting Dusk. (p.43)
When you walk past a shop window where large red chunks of butchered bodies are hanging on display, do you stop to wonder what it really is? You never think twice about it, do you? Or when you order a kebab or a chop – what are you actually getting? There’s nothing shocking about it. Crime has come to be regarded as a normal, everyday activity. Everyone commits it. That’s just how the world would look if concentration camps became the norm. Nobody would see anything wrong with them.’ (p.98)
For people of my age, the places that they truly loved and to which they once belonged are no longer there. The places of their childhood and youth have ceased to exist, the villages where they went on holiday, the parks with uncomfortable benches where their first loves blossomed, the cities, cafés and houses of their past. And if their outer form has been preserved, it’s all the more painful, like a shell with nothing inside it any more. I have nowhere to return to. It’s like a state of imprisonment. The walls of the cell are the horizon of what I can see. Beyond them exists a world that’s alien to me and doesn’t belong to me. (p.146)
The psyche is our defence system – it makes sure we’ll never understand what’s going on around us. Its main task is to filter information, even though the capabilities of our brains are enormous. For it would be impossible to carry the weight of this knowledge. Because every tiny particle of the world is made of suffering. (p.197)
Newspapers rely on keeping us in a constant state of anxiety, on diverting our emotions away from the things that really matter to us. Why should I yield to their power and let them tell me what to think? (p.235)
Source: Drive Your Plow Over the Bones of the Dead | Wikipedia

Mainstream social media is a behaviour-modification system

A couple of years ago I would have said that this analogy of an atom bomb being exploded over our information ecosystem is a bit extreme. Not now.

I’ve said this over and over, that, really, this is like when 140,000 people died instantly in Hiroshima and Nagasaki. The same thing has happened in our information ecosystem, but it is silent and it is insidious. This is what I said in the Nobel lecture: An atom bomb has exploded in our information ecosystem. And here’s the reason why. I peg it to when journalists lost the gatekeeping powers. I wish we still had the gatekeeping powers, but we don’t.

So what happened? Content creation was separated from distribution, and then the distribution had completely new rules that no one knew about. We experienced it in motion. And by 2018, MIT writes a paper that says that lies laced with anger and hate spread faster and further than facts. This is my 36th year as a journalist. I spent that entire time learning how to tell stories that will make you care. But when we’re up against lies, we just can’t win, because facts are really boring. Hard to capture your amygdala the way lies do.

[...]

Today we live in a behavior-modification system. The tech platforms that now distribute the news are actually biased against facts, and they’re biased against journalists. E. O. Wilson, who passed away in December, studied emergent behavior in ants. So think about emergent behavior in humans. He said the greatest crisis we face is our Paleolithic emotions, our medieval institutions, and our godlike technology. What travels faster and further? Hate. Anger. Conspiracy theories. Do you wonder why we have no shared space? I say this over and over. Without facts, you can’t have truth. Without truth, you can’t have trust. Without these, we have no shared space and democracy is a dream.

Source: Maria Ressa: How Disinformation Manipulates Elections | The Atlantic

Certain surroundings seem to dispel enchantment, and others encourage it

I really liked this article by Simon Sarris about what we grasp for versus what we get in domestic settings. I’m definitely receptive to the emotional (and even spiritual) aspects of our built environment at the moment, for some reason.

Handcrafted objects, textured colors, unpainted and unpolished surfaces (my walls show their raw plaster), natural materials, sunlight and shadow—all of these are signs of life. Life accepts the imperfect and the changing. The domestic need not be flamboyant—though sometimes it is magnificent to be so—after all my kitchen and Laquy’s are far from neon. But no kitchen or home should look lifeless. The design cues of the modern home are grasping at a kind of modernist perfectionism, and become flat because all life is removed in the process. Professional atmospheres (restaurant kitchens, warehouses, operating rooms) are antiseptic, often they need to be, so they simply banish life.

[…]

Intimacy is not clutter, but the proper demarcation of space. To lure back enchantment, we must learn to create the nook, to appreciate the wilder garden, to consider the power of shadows and small spaces, to welcome living materials over insensate ones. There is no formula that can easily arrive at intimacy, only a sensitivity to context that can be cultivated. If we look beyond the economic and utilitarian world, we will find a secret one waiting for us.

Source: Patina and Intimacy | Simon Sarris

Are we in a post-album era for music?

One of the downsides of getting older is that things you took to be sacred all of a sudden seem to be obsolete. For example, music albums, which have always been a part of my life, seem to now be referred to in the past tense?

There’s a whole Wikipedia article on the ‘album era’ so… it must be true.

The album era was a period in English-language popular music from the mid-1960s to the mid-2000s in which the album was the dominant form of recorded music expression and consumption. It was primarily driven by three successive music recording formats: the 33⅓ rpm long-playing record (LP), the audiocassette, and the compact disc. Rock musicians from the US and the UK were often at the forefront of the era, which is sometimes called the album-rock era in reference to their sphere of influence and activity. The term "album era" is also used to refer to the marketing and aesthetic period surrounding a recording artist's album release.
Source: Album era | Wikipedia

Warren Ellis' work day routine

I think the realisation that it’s impossible to ‘keep up’ (whatever that means) with even a subset of an industry these days may be the key to enlightenment.

One of the great things for me about Thought Shrapnel is that I can bookmark things I’d potentially like to go back and read. Then, if I do get the chance, I can share them here. It sounds like Ellis is doing something similar with his site.

I was telling someone the other day: I have become the old man who reads the papers in the morning and then watches the news analysis show on tv at night. The phone is now “the papers.”

[…]

I think I have only about eighty sites in my RSS reader these days, which generally generate some 150 new posts to read through. I should post an updated RSS list so I can see for myself.

My inputs used to be twenty times that, and constant from when I woke up to when I finally slept. That thing when you wake up with a shudder and reach for the phone because you’re behind the moment. But I suspect it took a pandemic and serial lockdowns for me to understand that, even when I was feeling good, it was like a motion detector alarm was going off in my head every second for eighteen hours a day. And you get so trained to it that when the alarms drop to just once every sixty seconds, you go looking for more input to bring the rate back up. I’ve been working hard to get past that

Source: Morning Routine and Work Day, Spring 2022 | WARREN ELLIS LTD

Image: Jon Tyson

Get off Twitter if you want to see your friends' posts

Tyler Freeman wrote a script to analyse the tweets he’s shown in his algorithmic Twitter timeline. 90% of his friends (i.e. the people he chose to follow) never made it to the main feed.

The diagram below shows the 90% in grey, with the people he follows in orange, strangers in blue, and ads in pink. This is what happens when you have software with shareholders.
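The kind of analysis Freeman describes can be sketched roughly as follows. This is a hypothetical reconstruction, not his actual script: the field names (`author`, `is_ad`) are illustrative, and real timeline data would come from Twitter's API.

```python
# Given a list of timeline tweets and the set of accounts you follow,
# work out what share of your followed accounts ever surface in the feed,
# plus a breakdown of where the tweets come from.
def timeline_coverage(timeline, following):
    """Return (fraction of followed accounts seen, breakdown of tweet sources)."""
    seen_authors = {t["author"] for t in timeline if not t.get("is_ad")}
    breakdown = {"followed": 0, "strangers": 0, "ads": 0}
    for t in timeline:
        if t.get("is_ad"):
            breakdown["ads"] += 1
        elif t["author"] in following:
            breakdown["followed"] += 1
        else:
            breakdown["strangers"] += 1
    coverage = len(seen_authors & following) / len(following)
    return coverage, breakdown

following = {"alice", "bob", "carol", "dan", "erin", "frank", "grace", "heidi", "ivan", "judy"}
timeline = [
    {"author": "alice"}, {"author": "promo", "is_ad": True},
    {"author": "stranger1"}, {"author": "stranger2"},
]
coverage, breakdown = timeline_coverage(timeline, following)
print(coverage, breakdown)  # coverage is 0.1: only 1 of the 10 followed accounts appeared
```

Run against a real timeline dump, a coverage figure of 0.1 would correspond to the 90% of followed accounts Freeman found hidden from his feed.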

I am following over 2,000 people, so to only see tweets from 10 percent of them is disconcerting; 90 percent of the people I intentionally follow, and want to hear from, are being ignored/hidden from me. When we dig deeper, it gets even worse.

[…]

The way I see it, the centralized path via government regulation is a short-term fix which may be necessary given the amount of power our current societal structures allot to social media corporations, but the long-term fix is to put the power into the hands of each user instead—especially considering that centralized power structures are how we got into this mess in the first place. I’m eager to see what this new world of decentralization will bring us, and how it could afford us more agency in how we donate our attention and how we manage our privacy.

Source: Does Twitter’s Algorithm Hate Your Friends? | Nightingale

Virtual Photographer Of The Year awards

I love Red Dead Redemption 2, and it’s great that the stunning vistas and scenery in the game are recognised.

During London Games Festival, a 'Red Dead Redemption 2' screenshot won a competition promoting the art of virtual photography.
Source: 'Red Dead Redemption 2' screenshot wins Virtual Photographer Of The Year

The future of the web, according to Mozilla

There’s nothing particularly wrong with this document. It’s just not very exciting. Maybe that’s OK.

Mozilla's mission is to ensure that the Internet is a global public resource, open and accessible to all. We believe in an Internet that puts people first, where individuals can shape their own experience and are empowered, safe, and independent.

The Internet itself is low-level infrastructure — a connective backbone upon which other things are built. It’s essential that this backbone remains healthy, but it’s also not enough. People don’t experience the Internet directly. Rather, they experience it through the technology, products, and ecosystems built on top of it. The most important such system is the Web, which is by far the largest open communication system ever built.

This document describes our vision for the Web and how we intend to pursue that vision. We don’t have all the answers today, and we expect this vision to evolve over time as we identify new challenges and opportunities. We welcome collaboration — both in realizing this vision, and in expanding it in service of our mission.

Source: Mozilla’s vision for the evolution of the Web

Historic aerial photos of England

It's annoying they can't be downloaded, but fun to see historic aerial photos of my home town!

You can explore over 400,000 digitised photos taken from our aerial photo collections of over 6 million photographs preserved in the Historic England Archive.
Source: Aerial Photo Explorer – Over 400,000 aerial photos in Historic England's digitised collections | Historic England

How to be a darknet drug lord

Wow, who knew how difficult it was to be a criminal? Found via HN.

Strip lights in darkness

You're an aspiring drug kingpin. Go out and pay cash for another computer. It doesn't have to be the best or most expensive, but it needs to be able to run Linux. For additional safety, don't lord over your new onion empire from your mother's basement, or any location normally associated with you. Leave your phone behind when you head out to manage your enterprise so you aren't tracked by cell towers. Last but not least for this paragraph, don't talk about the same subjects across identities and take counter-measures to alter your writing style.

[…]

Disinformation is critical to your continued freedom. Give barium meat tests to your contacts liberally. It doesn’t matter if they realize they’re being tested. Make sure that if you’re caught making small talk, you inject false details about yourself and your life. You don’t want to be like Ernest Lehmitz, a German spy during World War II who sent otherwise boring letters about himself containing hidden writing about ship movements. He got caught because the non-secret portion of his letters gave up various minor personal details the FBI correlated and used to find him after intercepting just 12 letters. Spreading disinformation about yourself takes time, but after a while the tapestry of deceptions will practically weave itself.

[…]

Take-away: If you rely only on tor to protect yourself, you’re going to get owned and people like me are going to laugh at you. Remember that someone out there is always watching, and know when to walk away. Do try to stay safe while breaking the law. In the words of Sam Spade, “Success to crime!"

Source: So, you want to be a darknet drug lord… | nachash

British monarchs helped fund, and profited from, the slave trade

The monarchy wasn’t a force for good during the age of colonialism/empire, nor is it a force for good now.

Map of slave trade routes

In 1660, the Royal African Company was established by the Duke of York, who later became James II, with involvement from his brother, Charles II. The Royal African Company was prolific within the slave trade; according to the Slave Voyages website, between 1672 and 1731 the Royal African Company transported more than 187,000 slaves from Africa to English colonies in North, Central and South America. Many of the enslaved Africans transported by the Royal African Company were branded “DY”, standing for Duke of York.

Between 1690 and 1807, an estimated 6 million enslaved Africans were transported from west Africa to the Americas on British or Anglo-American ships. The slave trade was protected by the royal family and parliament.

Source: What are the British monarchy’s historical links to slavery? | The Guardian

Live map of electricity production highlights carbon criminals

This live map of electricity production and consumption is really interesting, on a number of levels. First, it’s great that it exists! It really helps show, for example, that Poland needs to get its act together.

But also, design decisions matter. For example, the focus on carbon, while important, obscures the fact that nuclear might help get us out of the current mess but is really storing up problems for future generations.
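For context on what a map like this computes: a grid's carbon intensity is essentially the production-weighted average of each source's emission factor. The sketch below is an illustration using approximate lifecycle figures (in gCO2eq/kWh), not electricityMap's actual methodology or dataset.

```python
# Rough lifecycle emission factors, gCO2eq/kWh, for illustration only.
EMISSION_FACTORS = {"coal": 820, "gas": 490, "solar": 45, "wind": 11, "nuclear": 12, "hydro": 24}

def carbon_intensity(production_mw):
    """Weighted-average gCO2eq/kWh for a production mix given in MW per source."""
    total = sum(production_mw.values())
    return sum(EMISSION_FACTORS[src] * mw for src, mw in production_mw.items()) / total

coal_heavy = {"coal": 700, "gas": 200, "wind": 100}
print(round(carbon_intensity(coal_heavy)))  # a coal-heavy mix lands near 673
```

This weighting is also why nuclear-heavy grids show up deep green on a carbon-focused map, which is exactly the design decision (what the colour scale measures, and what it leaves out) that the commentary above is pointing at.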

Map showing Europe coloured different shades of green, yellow, orange, and red

electricityMap is a live visualization of where your electricity comes from and how much CO2 was emitted to produce it.
Source: electricityMap | Live CO₂ emissions of electricity consumption

Do NFTs tend towards dystopia?

At the weekend I visited the Moco Museum in Amsterdam with my wife. It’s the first time I’ve seen an NFT art exhibition. It wasn’t… bad? But, as someone commented when I said as much on social media, the ownership model is kind of irrelevant. It’s the digital art that matters.

(the animation below is from a video I took of an appropriate Beeple artwork)

What I think we’re all starting to realise is that for everything to be on the blockchain, we would need to fundamentally change the nature of human interaction. And that change would be toward dystopia.

In this article, James Grimmelmann, who is a professor at Cornell Law School and Cornell Tech (where he directs the Cornell Tech Research Lab in Applied Law and Technology) explains just this.

Loosely speaking, there are three kinds of property you could use an NFT to try to control ownership of: physical things like houses, cars, or tungsten cubes; information like digital artworks; and intangible rights like corporate shares.

By default, buying an NFT “of” one of these three things doesn’t give you possession of them. Getting an NFT representing a tungsten cube doesn’t magically move the cube to your house. It’s still somewhere else in the world. If you want NFTs to actually control ownership of anything besides themselves, you need the legal system to back them up and say that whoever holds the NFT actually owns the thing.

Right now, the legal system doesn’t work that way. Transfer of an NFT doesn’t give you any legal rights in the thing. That’s not how IP and property work. Lawyers who know IP and property law are in pretty strong agreement on this.

It’s possible to imagine systems that would tie legal ownership to possession of an NFT. But they’re (1) not what most current NFTs do, (2) technically ambitious to the point of absurdity, and (3) profoundly dystopian. To see why, suppose we had a system that made the NFT on a blockchain legally authoritative for ownership of a copyright, or of an original object, etc. There would still be the enforcement problem of getting everyone to respect the owner’s rights.

Grimmelmann clarifies in a footnote that just because NFTs might work for art, doesn’t mean they’re appropriate for… well, anything else:

A lot of the current hype around NFTs consists of the belief that the rest of the world will follow the same rules as NFT art. But of course part of the point of art is that it doesn’t follow the same rules as the rest of the world.
Source: I Do Not Think That NFT Means What You Think It Does | The Laboratorium

Hamiltonians and Jeffersonians

Cory Doctorow quite rightly calls out that Big Tech’s “too big to fail” status has created “oligopolistic power” which limits our choice over how we’re connected to the people we want to interact with.

I like his reference to Frank Pasquale’s two approaches to regulation. I guess I’m a Jeffersonian, too…

Every community has implicit and explicit rules about what kinds of speech are acceptable, and metes out punishments to people who violate those rules, ranging from banishment to shaming to compelling the speaker to silence. You’re not allowed to get into a shouting match at a funeral, you’re not allowed to use slurs when addressing your university professor, you’re not allowed to explicitly describe your sex-life to your work colleagues. Your family may prohibit swear-words at Christmas dinner or arguments about homework at the breakfast table.

One of the things that defines a community are its speech norms. In the online world, moderators enforce those “house rules” by labeling or deleting rule-breaking speech, and by cautioning or removing users.

Doing this job well is hard even when the moderator is close to the community and understands its rules. It’s much harder when the moderator is a low-waged employee following company policy at a frenzied pace. Then it’s impossible to do well and consistently.

[…]

It’s not that we value the glorious free speech of our harassers, nor that we want our views “fact-checked” or de-monetized by unaccountable third parties, nor that we want copyright filters banishing the videos we love, nor that we want juvenile sensationalism rammed into our eyeballs or controversial opinions buried at the bottom of an impossibly deep algorithmically sorted pile.

We tolerate all of that because the platforms have taken hostages: the people we love, the communities we care about, and the customers we rely upon. Breaking up with the platform means breaking up with those people.

It doesn’t have to be this way. The internet was designed on protocols, not platforms: the principle of running lots of different, interconnected services, each with its own “house rules” based on its own norms and goals. These services could connect to one another, but they could also block one another, allowing communities to isolate themselves from adversaries who wished to harm or disrupt their fellowship.

[…]

Frank Pasquale’s Tech Platforms and the Knowledge Problem poses two different approaches to tech regulation: “Hamiltonians” and “Jeffersonians” (the paper was published in 2018, and these were extremely zeitgeisty labels!).

Hamiltonians favor “improving the regulation of leading firms rather than breaking them up,” while Jeffersonians argue that the “very concentration (of power, patents, and profits) in megafirms” is itself a problem, making them both unaccountable and dangerous.

That’s where we land. We think that technology users shouldn’t have to wait for Big Tech platform owners to have a moment of enlightenment that leads to its moral reform, and we understand that the road to external regulation is long and rocky, thanks to the oligopolistic power of cash-swollen, too-big-to-fail tech giants.

Source: To Make Social Media Work Better, Make It Fail Better | Electronic Frontier Foundation

Cancel Technology

Noah Smith makes a good point in this article that ‘cancel culture’ has always existed, we just called it ‘social ostracism’. The difference is the technology we interact with, and the intended and unintended audiences with which we communicate.

First let’s think about distribution. In the olden days, you could “read the room” and decide whether you were going to get a sympathetic ear before you said something. You knew who you were hanging out with — your relatives, or your coworkers, or your buddies, or your neighbors, or your cell of the Communist Party, etc. On the internet, that’s much less true. On Twitter, anyone can see what you write and retweet it or screenshot it to millions of strangers all over the globe. In a Facebook group, you probably don’t know exactly what kind of others are in the group unless it’s really small. If you put something up on a website, anyone can read it. Etc.

The internet also makes it much harder to maintain private spaces because text can be screenshotted and distributed widely. In the old days, if you said something that would be cancel-worthy outside the group of people you were talking to, it was impossible for someone to verifiably transmit that information outside the group — they could snitch on you, but it would be hearsay and you could deny it. But when you write something down, the text of what you wrote can be screenshotted and distributed widely to people that you didn’t expect to be watching you.

Now, this broad distribution has a number of effects. It makes it a lot harder to get together with your buddies in private and say racist or sexist stuff, because now one of them can betray you with a screenshot. Lots of people are probably pleased with that outcome.

But it also means that everyone who talks on the internet must always worry about their words being shown to someone who’s going to interpret it in an uncharitable way.

[…]

Thus, the internet changes Cancel Culture by massively increasing the number of people who can target you for ostracism. It’s a bit like living in a gossipy small town where you don’t know any of your neighbors — you don’t know who’s going to read what you write, so you don’t know how people are going to take what you say.

Source: It’s not Cancel Culture, it’s Cancel Technology | Noahpinion

Declining trust in society isn't just a 'vibe shift'

This is a wide-ranging and somewhat jumbled article which nevertheless has at its core a key point about the decline in trust in society. That’s not just a ‘vibe shift’ but a more permanent and worrying state of affairs.

Consider the Edelman Trust Barometer. The public relations firm has been conducting an annual global survey measuring public confidence in institutions since 2000. Its 2022 report, which found that distrust is now “society’s default emotion,” recorded a trend of collapsing faith in institutions such as government or media.

[…]

It’s difficult to imagine how trust in national governments can be repaired. This is not, on the face of it, apocalyptic. The lights are on and the trains run on time, for the most part. But civic trust, the stuff of nation-building, believing that governments are capable of improving one’s life, seems to have dimmed.

Source: What You’re Feeling Isn’t A Vibe Shift. It’s Permanent Change. | BuzzFeed News

Twitter autoblock is what you get when you have software with shareholders

I heard from a former colleague that they’d been ‘autoblocked’ on Twitter for responding snarkily to someone. I don’t have an account there any more, so had to look up what they meant.

This kind of algorithmic blocking is the exact opposite of what you’d want from a platform that genuinely cared about human, community-focused interaction. We need to avoid this kind of approach with the Bonfire Zappa project.

It’s the unaccountability of it that gets me. The algorithm is a black box.

Twitter is currently experimenting with a feature called Safety Mode that detects and blocks potentially harmful language or repetitive, unwelcome interactions.

Some things to know about autoblock

  • Autoblocks come from Twitter, not individuals.
  • Autoblocks last for 7 days, but can be undone by the account owner at any time.
  • There’s no limit to how long someone stays in Safety Mode.
  • Just like when someone blocks you, if you’re autoblocked, it won't be possible to interact with them, see their Tweets, follow them, or send them Direct Messages.
  • Existing replies from autoblocked accounts move to the bottom of the conversation.
Source: About autoblock by Twitter

Antarctica used to be covered in rainforest

Given the news that both the Arctic and Antarctic are currently a lot warmer than expected, this is interesting. Sea levels 170 metres higher than today would mean that my house would be underwater…

A team from the UK and Germany discovered forest soil from the Cretaceous period within 900 km of the South Pole. Their analysis of the preserved roots, pollen and spores shows that the world at that time was a lot warmer than previously thought.

[…]

The work also suggests that the carbon dioxide (CO2) levels in the atmosphere were higher than expected during the mid-Cretaceous period, 115-80 million years ago, challenging climate models of the period.

The mid-Cretaceous was the heyday of the dinosaurs but was also the warmest period in the past 140 million years, with temperatures in the tropics as high as 35 degrees Celsius and sea level 170 metres higher than today.

[…]

They found that the annual mean air temperature was around 12 degrees Celsius; roughly two degrees warmer than the mean temperature in Germany today. Average summer temperatures were around 19 degrees Celsius; water temperatures in the rivers and swamps reached up to 20 degrees; and the amount and intensity of rainfall in West Antarctica were similar to those in today’s Wales.

Source: Traces of ancient rainforest in Antarctica point to a warmer prehistoric world | Imperial News | Imperial College London

San Francisco is built on the carcasses of old ships

Very cool. There’s a metaphor in there somewhere.

When the gold rush began in 1848, thousands of people sailed into California, hoping to strike it rich. The ships that sailed there were often just enough to get the crew there. Many would never sail again.

A large portion of the ships that landed in San Francisco Bay were simply left to rot as the crews they brought got caught up in gold fever. At the height of the gold rush, there were 500 to 1,000 ships moored in the harbor, clogging up traffic and making the waters almost un-navigable.

The city needed land, and since most of it had already been built on, politicians devised a brilliant solution: start building on the water. The city started selling plots of bay water on the condition that the new owner would turn it into new land. So, ships were intentionally run aground and built into hotels and bars – they became part of the city.

Source: Why is San Francisco's Foundation Built on Old Ships from the Mid-1800s? | Interesting Engineering

Solarpunk and five climate futures

In this interview with Andrew Dana Hudson, he lays out a brief overview of the five futures he discusses in his book. This, in turn, is based on his Masters thesis.

There’s a lot of optimism in solarpunk approaches to the future, which is attractive. We just need to have the will to realise that it’s not already over.

There is a very optimistic sustainable scenario, full of community and open-hearted kindness and capitalist power fading to a bad memory. But there’s also a scenario of overclocked consumerism, another of neo-feudal inequality, and a third of persistent military conflict and global breakdown. And a middle-of-the-road scenario in which, like today, we slowly make some progress but never, ever enough.

[…]

Solarpunk also seems to me a bridge toward a future of energy abundance. It’s funny, given how much solarpunk is (wonderfully) influenced by crusty degrowthers and permacultural downshifters, but it’s possible that if we keep building renewables the way we’re projected to over the next decade or so, we might end up with access to way more energy than human beings have ever had to work with (at least during the day). What do we do with that? Those electrons have to go somewhere. Well, we’ll need a lot of energy to remove a Lake Michigan’s worth of carbon out of the atmosphere, in order to stabilize the climate and roll back ocean acidification. Call that Big Chemistry. But probably there’s room for Big Computation and Big Culture as well. If solar energy becomes cheaper than free, how does that broaden our artistic ambitions? And what does that sometimes-post-scarcity mindset mean for how we treat each other?

Source: Our Shared Storm: An Interview with Andrew Dana Hudson – Solarpunk Magazine

If you believe it's over, maybe it will be

A few weeks ago, I linked to Nesta’s predictions for 2022. One of them, climate inactivism, is a form of nihilism and helplessness I also see in relation to the current war in Ukraine.

The historian in me knows that nothing is inevitable. But it’s hard to feel that one has any ability to shape or contribute to things that require aligned geopolitical will — especially while slowly crawling out of a pandemic bunker.

Just about everyone these days is a grumpy old man, obsessed with decline and convinced of its inevitable loom. More than that, the acolytes of this cult, who are everywhere, are deeply suspicious that anything could ever be as good again. The problem with the cult of decline is that it presupposes a certain fixed point of view... and is deliberately blind to all others.

[…]

From the point of view of a barnacle, the whole of the ocean seems to rise and fall. Stand long enough before the tides and eventually you’ll see the erosion of the shore. But what are the tides from inside the ocean? The erosion of one beach leads to its deposition elsewhere. What is decline but the birth of something new?

The cult of decline worships a tautology, a cognitive bias, a bank run. If you believe things are failing, then you won’t expend the energy necessary to sustain them, and they will fail. In a universe like ours, dominated by the Second Law of Thermodynamics, some creative vitality is necessary to sustain anything, even a good mood.

The Roman Empire didn’t succumb to insurmountable tidal forces. There was no tsunami of decrepitude that wiped it away. Everything it faced at the end—wars, barbarians, epidemics—was something it had conquered at least twice before. The difference was that its people no longer cared enough to overcome them. They believed it was over, and so it was.

Source: Thankfully, Everything is Doomed | Rick Wayne

Challenging capitalism through co-ops and community

The glossy Instagram lifestyle is actually led by a fraction of a fraction of 1% of the world’s population. Instead of us all elbowing each other out of the way in pursuit of that, this article points to a better solution: co-operation.

There are two types of economics active in the world right now — which basically means two radically divergent varieties of economic life. The first is economics as most economists and writers see it and talk about it. The second is economics as most people live it.

Call the first “the top-up.” It’s the economics of competition and asymmetrical knowledge and shareholder value and creative destruction. It’s the dominant system. We know all about the top-up. Tales of the doings of the top-up economy are mainlined into our brains from business articles, financial analysis, stories about our planet’s richest people or corporations or nations. Bezos. Buffett. Gates. Musk. Zuckerberg. The Forbes 400. The Fortune 500. The Nasdaq. The Nikkei. On and on.

Call the second “the bottom-down.” We don’t hear as much about it because it’s a lot less sexy and a lot more sticky. It involves survival mechanisms and community solidarity and cash-in-hand calculations.

But it’s the economic system of the global majority, and this makes it the more important of the two.

[…]

The top-up economic sphere functions like a gated community in which people who have money can pretend that everything they do and have in life is based on merit, and that the communal and cooperative boosts from which they profit are nothing but natural outgrowths of that merit.

[…]

Change always comes from below — and it is in the bottom-down relationships where growth and egalitarianism can flourish. Every volunteer fire department is a community platform. Every mutually managed water system demonstrates that neighbors can build things when they need each other. Every community-based childcare network or parent-teacher association is a nascent collective. Every civic association, neighborhood or church council, social action network or food pantry gives people a broader perspective. Every collectively run savings and credit association demonstrates that communal trust can give people a leg up.

Source: Co-ops And Community Challenge Capitalism | Noema

Some fairy tales may be 6,000 years old

It’s fascinating to think that children’s stories may have been told and re-told across languages and cultures for millennia. It just goes to show the power of narrative structure!

Fairy tales are transmitted through language, and the shoots and branches of the Indo-European language tree are well-defined, so the scientists could trace a tale's history back up the tree—and thus back in time. If both Slavic languages and Celtic languages had a version of Jack and the Beanstalk (and the analysis revealed they might), for example, chances are the story can be traced back to the "last common ancestor." That would be the Proto-Western-Indo-Europeans from whom both lineages split at least 6800 years ago. The approach mirrors how an evolutionary biologist might conclude that two species came from a common ancestor if their genes both contain the same mutation not found in other modern animals.

[…]

Tehrani says that the successful fairy tales may persist because they’re “minimally counterintuitive narratives.” That means they all contain some cognitively dissonant elements—like fantastic creatures or magic—but are mostly easy to comprehend. Beauty and the Beast, for example, contains a man who has been magically transformed into a hideous creature, but it also tells a simple story about family, romance, and not judging people based on appearance. The fantasy makes these tales stand out, but the ordinary elements make them easy to understand and remember. This combination of strange, but not too strange, Tehrani says, may be the key to their persistence across millennia.

Source: Some fairy tales may be 6000 years old | AAAS
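
The dating logic described in the passage above is, at heart, a lowest-common-ancestor computation over the language family tree: find the deepest ancestor shared by every branch that attests a tale, and the tale is plausibly at least as old as that ancestor. Here’s a minimal sketch of that idea — the tree shape, the dates, and the tale-to-branch assignments below are simplified placeholders for illustration, not the study’s actual phylogeny or data:

```python
# Illustrative sketch: dating a tale via the lowest common ancestor (LCA)
# of the language branches that attest it. Tree and dates are placeholders.

TREE = {  # child -> parent
    "Celtic": "Proto-Western-Indo-European",
    "Slavic": "Proto-Western-Indo-European",
    "Indic": "Proto-Indo-European",
    "Proto-Western-Indo-European": "Proto-Indo-European",
}

AGE = {  # rough minimum age of each ancestor node, in years before present
    "Proto-Western-Indo-European": 6800,
    "Proto-Indo-European": 8000,
}

def ancestors(node):
    """Return the chain of ancestors from node up to the root, nearest first."""
    chain = []
    while node in TREE:
        node = TREE[node]
        chain.append(node)
    return chain

def minimum_age(attesting_branches):
    """Estimate a tale's minimum age as the LCA of all branches attesting it."""
    chains = [ancestors(b) for b in attesting_branches]
    # The LCA is the nearest ancestor of the first branch shared by all others.
    for node in chains[0]:
        if all(node in chain for chain in chains[1:]):
            return node, AGE[node]
    raise ValueError("no common ancestor found")

# If both Celtic and Slavic languages tell Jack and the Beanstalk, the tale
# is plausibly at least as old as their last common ancestor.
node, age = minimum_age(["Celtic", "Slavic"])
```

This mirrors the evolutionary-biology analogy in the excerpt: a trait (here, a tale) shared by two lineages but absent elsewhere is most parsimoniously explained by inheritance from their common ancestor rather than independent invention.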

A weird tip for weight loss

Hacker News isn’t just a great resource for tech-related news. The ‘Ask HN’ threads can also be a wonderful source of information or just provide different ways of thinking about the world.

In this example, the top-voted answer to a question about weight loss had me thinking about gut bacteria ‘craving’ sugar. Weird, but a useful framing.

This is a weird tip I think I could only share with the hacker news crowd. Once I learned about gut bacteria I started thinking of my cravings as something external to me. Like instead of saying "I'm hungry and I'm in the mood for something sweet" I would realize "the hormone ghrelin is sending hunger signals to my brain and the gut bacteria in my body is asking for something that's not actually in my best interest." Being able to emotionally distance myself from my feelings let me make decisions that I knew were better for me.
Source: Ask HN: Any weird tips for weight loss? | Hacker News

The week as a human construct

This article in Aeon was published at around the same time as I published a post on my personal blog about time as a human construct. In that post, I talked about the French Republican calendar and the link between it and the weather.

What’s interesting in this article is that the author, David Henkin, a history professor, attributes the success of the week to the fact that it’s not tied to religious, cultural, or climatological norms.

Weeks serve as powerful mnemonic anchors because they are fundamentally artificial. Unlike days, months and years, all of which track, approximate, mimic or at least allude to some natural process (with hours, minutes and seconds representing neat fractions of those larger units), the week finds its foundation entirely in history. To say ‘today is Tuesday’ is to make a claim about the past rather than about the stars or the tides or the weather. We are asserting that a certain number of days, reckoned by uninterrupted counts of seven, separate today from some earlier moment.

[…]

The modern week has superimposed upon the ancient week a rhythm that is fundamentally social, incorporating an awareness of the demands and constraints of other people. Yet the modern week is also somewhat individualised, inasmuch as its rhythms are shaped by all sorts of private decisions we make, especially as consumers. Whereas Sabbath counts and astrological dominions subject everyone to the same schedule, the modern week makes us aware of our relationship to our networks and to the habits of others, while simultaneously highlighting the variety of our networks and the contingency of those habits.

Source: How we came to depend on the week despite its artificiality | Aeon

The Un-Grammable Hang Zone

Instagram has never been a place I’ve ever wanted to spend any time or attention. But its impact on physical spaces is undeniable.

This post (newsletter issue?) by Drew Austin cites a couple of other authors who perfectly skewer the Instagram aesthetic as being a grammar that quickly conveys that somebody… did a thing.

Living my best life

The Blackbird Spyplane newsletter recently made a valuable contribution to the pantheon of essays about how the internet has transformed the physical world: a hopeful manifesto in praise of the “Un-Grammable Hang Zone,” the definition of which will be obvious if you’ve spent enough time in the Instagram-optimized settings that have proliferated in cities during the past decade—places that BBSP describes as a “high-efficiency, low-humanity kind of eatery where you point yr phone at a QR code and do contactless payment before eating a room-temp grain bowl under a pink neon sign that says ‘Living My Best Life’ in cursive.”

[…]

Affirming the interchangeability of “millennial” and “Instagrammable” as descriptors, Fischer pinpoints the force that really drives them: Instagrammable “does not mean ‘beautiful’ or even quite ‘photogenic’; it means something more like ‘readable.’ The viewer could scroll past an image and still grasp its meaning, e.g., ‘I saw fireworks,’ ‘I am on vacation,’ or ‘I have friends.’” If Instagram as a medium demands readability, in other words, it puts pressure on the physical environment to simplify itself accordingly, at least in the long run.

Source: #178: I Can See It (But I Can’t Feel It) | Kneeling Bus

A hardwired obedience to the capitalist system that we exist within

I’m not sure where I came across this, but Ian Nesbitt is undertaking a modern pilgrimage on a recently-uncovered medieval route from Southampton to Canterbury.

He talks about the ‘inner journey’ as well as the actual one-foot-in-front-of-another journey. Sounds interesting, so I’ve added his blog to my feed reader.

Then there was a pandemic. During that period, like many others, I found myself looking inwards and, in the relative stasis of those months, began to question parts of myself that I never questioned before, in particular the drive to progress and keep moving on to the next thing, to keep producing. I began to wonder if that wasn’t just part of my character, so to speak, but actually a hardwired obedience to the capitalist system that we exist within.
Source: Pilgrimage #1: the adequate step | The Book of Visions

What if I never change?

Oliver Burkeman on Jocelyn K. Glei’s Hurry Slowly is an absolute treat. In particular, he quotes Jim Benson on how we can easily become “a limitless reservoir for other people’s expectations”. I also liked the discussion around the “internalised capitalism” of “clock time”.

The title comes from an important point that Burkeman makes about so many of our hopes and dreams being based on somehow in the future being a radically different person to who we are now.

It reminded me of a section in Alain de Botton’s The Art of Travel in which he summarises Seneca by saying that the problem about going somewhere to escape things is that you always take yourself (and your mental/emotional baggage) with you…

Oliver Burkeman on why we try to control time, how perfectionism holds us back, and the problems with a “when-i-finally” mindset.
Source: Oliver Burkeman: What if I never change? | Hurry Slowly

Switching from Telegram to Signal

Like many people in a relationship, I have a persistent backchannel with my wife. I have never used WhatsApp, and so we ended up using Telegram. After reading this article from the EFF, an organisation I donate to on a monthly basis, we’ve switched to Signal.

My wife’s family moved to Signal after one of the privacy debacles around data sharing between WhatsApp and Facebook. Many people I know have switched from Telegram to Matrix for group chat.

So the only people left on Telegram that I contact regularly are my parents, my sister, and a few random people I probably haven’t messaged for a while…

If you do not have [Telegram’s] secret chat turned on, your chat communications can be exposed or seen just like channels and groups. If you do turn on secret chat, then Telegram cannot see the contents of your communication, but they still have access to metadata about the communications, including who you talked to and when you talked to them. It may be possible to draw very specific conclusions about what you are doing based only on the metadata about your conversation.

Source: Telegram Harm Reduction for Users in Russia and Ukraine | Electronic Frontier Foundation

AI-synthesized faces are here to fool you

No-one who’s been paying attention should be in the least surprised that AI-synthesized faces are now so good. However, we should probably be a bit concerned that research suggests they are rated as “more trustworthy” than real human faces.

The researchers’ recommendation of “incorporating robust watermarks into the image and video synthesis networks” would be almost impossible to enforce in practice, so we need to ensure that we’re ready for the onslaught of deepfakes.

This is likely to have significant consequences by the end of this year at the latest, with everything that’s happening in the world at the moment…

Synthetically generated faces are not just highly photorealistic, they are nearly indistinguishable from real faces and are judged more trustworthy. This hyperphotorealism is consistent with recent findings. These two studies did not contain the same diversity of race and gender as ours, nor did they match the real and synthetic faces as we did to minimize the chance of inadvertent cues. While it is less surprising that White male faces are highly realistic—because these faces dominate the neural network training—we find that the realism of synthetic faces extends across race and gender. Perhaps most interestingly, we find that synthetically generated faces are more trustworthy than real faces. This may be because synthesized faces tend to look more like average faces which themselves are deemed more trustworthy. Regardless of the underlying reason, synthetically generated faces have emerged on the other side of the uncanny valley. This should be considered a success for the fields of computer graphics and vision. At the same time, easy access (https://thispersondoesnotexist.com) to such high-quality fake imagery has led and will continue to lead to various problems, including more convincing online fake profiles and—as synthetic audio and video generation continues to improve—problems of nonconsensual intimate imagery, fraud, and disinformation campaigns, with serious implications for individuals, societies, and democracies.
Source: AI-synthesized faces are indistinguishable from real faces and more trustworthy | PNAS

Lizard brain vs infinite scroll

It’s funny that the author of this article uses Reddit’s app as an example of the problems with infinite scroll, as it’s the app I’ve most recently deleted from my phone. I installed it because I had to in order to continue reading a particular subreddit that I needed access to, but the front page is just so interesting for easily-distracted people (i.e. all of us) that I had to delete it a few days later.

As a parent, there are some apps I don’t allow my kids to access at all, and other ones I kind of tolerate if they access them through the browser. The combination of notifications and infinite scroll is a dangerous drug for the mind.

When I take a minute to think about the things I enjoy doing with my devices, it helps me realize that they’re the ones where I’m deliberately using it. Talking to people I know, for example. Watching that movie I had been looking forward to. Looking up the origin of an oddly spelled word. Creating, rather than just consuming, and using it as a tool to improve my life, even if that little improvement is a one word answer to a tiny question that had been bugging me.

[…]

I don’t have any quick fixes or easy answers. I’ve struggled with this for a very long time. I’ve gotten much better at dealing with it, but I find I have to remain conscious of it. That’s where admitting defeat helps; I know how my brain works, and I can work with it. Let’s not install that app with the infinite scroll, since we can probably get by with just the mobile web version. Let’s not log in, unless there’s a reason you need to, since they’re after you with recommendations for your account. Let’s try to be conscious of how much time you end up spending on certain sites.

Source: My lizard brain is no match for infinite scroll | Caffeinspiration

Xero starts using consent-based decision making

Sociocracy, which includes consent-based decision making, is something we use at WAO. I’ve written about it several times on my personal blog as well as here.

It looks like the approach is working not just for yogurt-knitting vegans who work in co-operatives, but for hard-nosed businesses like Xero. Who knew?

For the past year, I’ve been lucky enough to work on a big technology project at Xero, where I spend much of my time supporting the leadership team. But it didn’t take long for me to realise that when you have a large number of people bringing their own perspectives and opinions into a complex situation, consensus is going to be challenging (if not impossible).

So I decided to introduce consent-based decision making to the leadership team. It’s something that a colleague introduced me to last year. It’s had such a positive impact on my work that I thought I’d share more about it, in the hope that you can use this simple technique in your team as well.

Source: Making better, faster decisions that are good enough for now | Bonnie Slater

What makes writing more readable?

I had the pleasure of interviewing Georgia Bullen, Executive Director of Simply Secure yesterday. I noticed that her website links to an active RSS feed from her Instapaper account, which I immediately added to my feed reader.

My first gleaning from that feed came today, when I came across this clever website, which not only explains but shows how to make writing more readable. Highly recommended.

Technology alone isn’t the answer. Even the most thoughtful algorithms and robust data sets lack context. Ultimately, the effectiveness of plain language translations comes down to engagement with your audience. Engagement that doesn’t make assumptions about what the audience understands, but will instead ask them to find out. Engagement that’s willing to work directly with people with disabilities or limited access to education, and not through intermediaries. As disabled advocates and organizations led by disabled people have been saying all along: “Nothing about us without us.”
Source: What makes writing more readable? | pudding.cool

Audrey Watters on the technology of wellness and mis/disinformation

Audrey Watters is turning her large brain to the topic of “wellness” and, in this first article, talks about mis/disinformation. This is obviously front of mind for me given my involvement in user research for the Zappa project from Bonfire.

In February 2014, I happened to catch a couple of venture capitalists complaining about journalism on Twitter. (Honestly, you could probably pick any month or year and find the same.) “When you know about a situation, you often realize journalists don’t know that much,” one tweeted. “When you don’t know anything, you assume they’re right.” Another VC responded, “there’s a name for this and I think Murray Gell-Mann came up with it but I’m sick today and too lazy to search for it.” A journalist helpfully weighed in: “Michael Crichton called it the ‘Murray Gell-Mann Amnesia Effect’,” providing a link to a blog with an excerpt in which Crichton explains the concept.
Source: The Technology of Wellness, Part 1: What I Don't Know | Hack Education

Offline for 3 days

David Cain took three days offline. It’s the kind of thing that wouldn’t have been remarkable 15 years ago, but these days it goes straight to the front page of Hacker News.

I can understand why it’s weird to live in the hybrid world of being middle-aged and being alive before everything and everyone was online. But the big thing we need to do is to help the next generations understand that there is an offline world which is rich and worthwhile.

This simplicity was disorienting in a way. Many times a day I would finish whatever activity I was doing, and realize there was nothing to do but consciously choose another activity and then do that. This is how I made my first bombshell discovery: I take out my phone every time I finish doing basically anything, knowing there will be new emails or mentions or some other dopaminergic prize to collect. I have been inserting an open-ended period of pointless dithering after every intentional task.
Source: Raptitude.com – Getting Better at Being Human

Facebook is dying

While I only deleted my Twitter account at the end of last year, it’s been about 12 years since I deleted my Facebook one. As Cory Doctorow points out, it’s a terrible organisation that no-one should work for, and whose products no-one should use.

Facebook logo and stock ticker going down

Even before its stock fell off a cliff, Facebook was mired in a multi-year hiring crisis. Nobody wanted to work for Facebook because it’s a terrible company that makes terrible products that everyone hates and only use because the company has rigged the system to punish users for switching.

Facebook was already paying a wage premium, offering sweeteners to in-demand workers in exchange for checking their consciences at the door. Those sweeteners mostly took the form of shares, which means that all those morally flexible “Metamates” got a hefty pay-cut when the company’s stock price fell off a cliff. Expect a lot of them to leave – and expect the company to have to pay even more to replace them. Companies with falling share prices can’t use share grants to attract workers.

Facebook is now famously trying to pivot (ugh) to virtual reality to save itself. It’s an expensive gambit. It’s going to alienate a lot of its users. It’s going to alienate a lot of its in-demand workers. It’s going to freak out a lot of regulators.

Meanwhile, the switching costs for people who want to jump ship keep getting lower. It’s not merely that fewer and fewer of the people you want to talk with are still on Facebook. Even if there’s someone whose virtual company you can’t bear to part with, lawmakers in the US and Europe are working on legislation that would force Facebook to allow third parties to “federate” new services with it. That would mean that you could quit Facebook and join an upstart rival – say, one by a privacy-respecting nonprofit or even a user-owned co-op – and still exchange messages with the communities, customers and family you left behind on Facebook’s sinking ship.

Source: I’ve been waiting 15 years for Facebook to die. I’m more hopeful than ever | The Guardian

The hard part of the work is doing the work

I am thankful every working day that I set up a co-operative with friends and former colleagues so that while I’m in control of my own destiny, I also have awesome people to work alongside.

Freelancing is like having a job without a boss (alas)

Well, you still have a boss. It’s you. And you might not be a good one. Freelancers spend part of their day doing the work, and the rest of the time earning better clients.

Source: Common pitfalls and myths of the new economy | Seth’s Blog

AI cannot hold copyright (yet)

I think common sense would suggest that copyright should only apply to human-created works. But the line between what human brains and artificial ones do when working together is a thin one, so I don’t think this ruling is the last word.

A Recent Entrance to Paradise is part of a series Creativity Machine produced on the subject of a near-death experience. Thaler said the work “was autonomously created by a computer algorithm running on a machine,” according to court documents.

The U.S. Copyright review board said that this goes against the basic tenets of copyright law, which suggest that the work must be the product of a human mind. “Thaler must either provide evidence that the Work is the product of human authorship or convince the Office to depart from a century of copyright jurisprudence. He has done neither,” wrote the review board in its decision.

Source: U.S. Copyright Office Rules That AI Cannot Hold Copyright | ARTnews.com

Technology and productivity

Julian Stodd’s personal realisation that what the people who make ‘productivity tools’ want and what he wants might be two different things.

See also: Four Thousand Weeks: Time Management for Mortals by Oliver Burkeman

I fear that the suites of tools and features that allow me to work from anywhere do, in fact, distract me everywhere.

I feel that at times I have lost the art of long form and collapsed into the conversational and reactive.

[…]

Does technology always make us more productive – or can technology hold us apart? Do we need to be together to forge culture, and to find meaning, or can being together make us more busy than wise?

I suspect my personal (and perhaps our organisational) challenge is one of separation: to separate out my segregated spaces – to separate my thinking and doing, my learning and acting, my reflection and practice.

Source: The Delusion of Productivity | Julian Stodd’s Learning Blog

Hacking the application process

It’s perhaps a massive over-simplification, but my understanding of the so-called ‘skills gap’ is that two things are happening.

The first is a long-term trend of employers expecting to spend zero dollars on training the people they hire.

The second is the use of algorithmic CV-scanning software to reject the majority of applicants. Not surprisingly, although it might make recruiters' jobs a bit more manageable, it’s not great for diversity or for finding people who haven’t done that exact job before.

Software can also disadvantage certain candidates, says Joseph Fuller, a management professor at Harvard Business School. Last fall, the US Equal Employment Opportunity Commission launched an initiative to examine the role of artificial intelligence in hiring, citing concerns that new technologies presented “a high-tech pathway to discrimination.” Around the same time, Fuller published a report suggesting that applicant tracking systems routinely exclude candidates with irregularities on their résumés: a gap in employment, for example, or relevant skills that didn’t quite match the recruiter’s keywords. “When companies are focused on making their process hyperefficient, they can over-dignify the technology,” he says.
Source: How Job Applicants Try to Hack Résumé-Reading Software | WIRED

You cannot 'solve' online misinformation

Matt Baer, who founded the excellent platform write.as, weighs in on misinformation and disinformation.

This is something I’m interested in anyway given my background in digital literacies, but especially at the moment because of the user research I’m doing around the Zappa project.

Seems to me that a space made up of humans is always going to have (very human) lying and deception, and the spread of misinformation in the form of simply not having all the facts straight. It's a fact of life, and one you can never totally design or regulate out of existence.

I think the closest “solution” to misinformation (incidental) and disinformation (intentional) online is always going to be a widespread understanding that, as a user, you should be inherently skeptical of what you see and hear digitally.

[…]

As long as human interactions are mediated by a screen (or goggles in the coming “metaverse”), there will be a certain loss of truth, social clues, and context in our interactions — clues that otherwise help us determine “truthiness” of information and trustworthiness of actors. There will also be a constant chance for middlemen to meddle in the medium, for better or worse, especially as we get farther from controlling the infrastructure ourselves.

Source: “Solving” Misinformation | Matt

The life run by spreadsheet is not worth living

When work is the most significant thing in your life, you optimise for it. When relationships are the most significant things in your life, you optimise for those.

I find this post by ‘crypto engineer’ Nat Eliason a bit tragic, to be honest. He says he’s almost always working, there’s zero mention of family, and he says that all of his friends are people who are hustling too.

As Socrates didn’t say, “the life run by spreadsheet is not worth living”.

Here’s the biggest thing to keep in mind when you’re reading about my process:

I’m almost always working.

This is not some Tim Ferrissian “here’s how to work 2 hours a day and make lots of money” post. I tried that. It sucks. You’ll get depressed in about two days if you have an ounce of ambition in you. If you’re trying to optimize around working less, find better work.

It doesn’t mean, though, that I’m always doing things that feel like work. It means I enjoy the work that I do, and I’ve found ways to make my hobbies productive.

Source: How to Be Really, Really, Ridiculously Productive | Nat Eliason

The benefits of taking Wednesdays off

Today is a Wednesday, and I’m taking half-days today and tomorrow as it’s half-term for the kids. Pre-pandemic, though, I used to take Wednesdays off in their entirety, which was absolutely amazing, and I’m not sure why I don’t still do it.

There’s a real movement growing at the moment for a four-day week, which I think is a really positive thing for humanity. Let’s just hope it’s not only white-collar workers who can afford to reap the benefits.

One-offs, like a deadline for a big project, may temporarily restructure our lives, but cyclical pacers, like a two-day weekend followed by a five-day work week, have outsized psychological influence, partially because of repetition, and partially because they mimic the cyclical nature of our most fundamental pacer—day and night.

[…]

A Wednesday holiday interrupts the externally imposed pacer of work, and gives you a chance to rediscover your internal rhythms for a day. While a long weekend gives you a little more time on your own schedule, it doesn’t actually disrupt the week’s pacing power. A free Wednesday builds space on either side, and shifts the balance between your pace and work’s—in your favor.

Source: For Maximum Recharge, Take a Wednesday Off | Quartz

Dark patterns and gambling

Given that most gambling these days happens via smartphone apps, and that the psychological tricks used by gambling firms are also used by, for example, for-profit centralised social media sites, I found this fascinating (and worrying!).

Person climbing up a stack of dice

Kim Lund, founder of poker game firm Aftermath Interactive, has made a career out of game design and has seen at first-hand how cold, hard probability defeats the illogical human mind every time – and allows the gambling companies to cash in. “All gambling games are based on psychological triggers that mean they work,” he tells me. “The human brain is incapable of dealing with randomness. We’re obsessed with finding patterns in things because that prevents us from going insane. We want to make sense of things.”

[…]

In her 1975 paper The Illusion of Control, Ellen J Langer conducted a series of experiments that showed that our expectations of success in a game of chance vary, depending on factors that do not actually affect the outcome. One of the variables that makes a big difference to how gamblers behave is the introduction of an element of choice. In one of Langer’s experiments, subjects were given lottery tickets with an American football player on them. Some subjects got to choose which player they wanted, others were allocated a ticket at random. On the morning of the draw, everyone was asked how much they would be prepared to sell their ticket for. Those who had chosen their ticket demanded an average of $8.67, while those who had been allocated one at random were prepared to give it up for $1.96.

Source: What gambling firms don’t want you to know – and how they keep you hooked | The Guardian

Speeding up a Chromebook by allocating zram

Pixelbook

Oddly enough, in the few days since I bookmarked this URL, it's disappeared. Thank goodness for the Internet Archive!

I'll post the main details below, which are instructions for making Chromebooks run faster by allocating compressed cache. Note that on my Google Pixelbook (2017) I used '4000' instead of the recommended '2000', and it's really made a difference.

Also see: Cog - System Info Viewer

You use zram (otherwise known as compressed cache - compcache). With a single command you can create enough zram to compensate for your device's lack of physical RAM. You can create as much compcache as you need; but remember, most Chromebooks contain smaller internal drives, so create a swap space that doesn't gobble up too much of your physical drive (as swap is created using your Chromebook internal, physical drive).

To create compcache, you must work within Crosh (Chromebook shell), aka the command line. Believe it or not, the command use for this is incredibly simple; but the results are significant (especially in cases where you're frequently running out of memory).

[...]

The first thing you must do is open a Crosh tab. This is simple and doesn't require anything more than hitting the key combination [Ctrl]+[Alt]+[t]. When you find yourself at crosh> you know you're ready to go.

The command to create swap space is very simple and comes in the form of:

swap enable SIZE

Where SIZE is the size of the swap space you wish to create. The ChromeOS developers suggest adding a swap of 2GB, which means the command would be:

swap enable 2000

Once you've run the command, you must then reboot your Chromebook for the effect to take place. The swap will remain persistent until you run the disable command (again, from Crosh), like so:

swap disable

No matter how many times you reboot, the swap will remain until you issue the disable command.
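As for choosing the SIZE value, here is a rough sketch based on the common rule of thumb of allocating about half your physical RAM. The halving rule is my assumption for illustration, not official ChromeOS guidance:

```shell
# Suggest a zram swap size in MB as roughly half of physical RAM.
# The halving rule is an assumption for illustration, not an
# official ChromeOS recommendation.
suggest_zram_mb() {
  mem_kb=$1  # MemTotal in kB, as reported in /proc/meminfo
  echo $(( mem_kb / 1024 / 2 ))
}

# An 8 GB machine (8388608 kB) suggests 4096 MB, close to the
# '4000' used on the Pixelbook above. You would then run, in Crosh:
#   swap enable 4096
suggest_zram_mb 8388608
```

On a real Chromebook you'd read MemTotal from /proc/meminfo; the `swap enable` command itself still has to be run manually inside Crosh.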

Source: How to prevent a Chromebook from running out of memory | TechRepublic (archive.org link)

Stone Age culture in the Orkney islands

When I was eight years old, we took a family trip to the Orkney islands off the north coast of Scotland. I don’t know why we went there particularly, but it was amazing. I almost don’t want to go back because it might break the spell the place has cast over my life.

While we were there, with no kind of tourist fanfare, I was allowed to handle skulls that were thousands of years old, crawl into tombs, and generally really experience history. I doubt they have such a cavalier approach to artefacts these days…

Neolithic stone circle

If you happen to imagine that there’s not much left to discover of Britain’s stone age, or that its relics consist of hard-to-love postholes and scraps of bones, then you need to find your way to Orkney, that scatter of islands off Scotland’s north-east coast. On the archipelago’s Mainland, out towards the windswept west coast with its wave-battered cliffs, you will come to the Ness of Brodgar, an isthmus separating a pair of sparkling lochs, one of saltwater and one of freshwater. Just before the way narrows you’ll see the Stones of Stenness rising up before you. This ancient stone circle’s monoliths were once more numerous, but they remain elegant and imposing. Like a gateway into a liminal world of theatricality and magic, they lead the eye to another, even larger neolithic monument beyond the isthmus, elevated in the landscape as if on a stage. This is the Ring of Brodgar, its sharply individuated stones like giant dancers arrested mid-step – as local legend, indeed, has it.
Source: ‘Every year it astounds us’: the Orkney dig uncovering Britain’s stone age culture | The Guardian

Upgrading an iPod Video for use in 2022

I’m an OG when it comes to MP3 players, having owned an Archos MP3 Jukebox while I was at uni in about 2001. It was ridiculously expensive for me as a student, but I was working at HMV at the time, and I was (and still am!) really into music.

In the end, I ‘upgraded’ the battery in it and managed to melt the entire thing, then switched to Spotify for all of my music in 2009. But there’s definitely part of me that wants to get back to what I would consider ‘real’ music listening.

While I do have plenty of MP3s and FLAC files on my smartphone, there’s just something about having a separate device for music. And you don’t get more iconic than an iPod. So this project is super-cool and once again has me thinking…

See also: How To Enjoy Your Own Digital Music

See also: ListenBrainz

I realised something not so long ago - I was being very lazy. I'd often just play my weekly/daily mix, or some playlist I made up a long time ago. I'd never really think about what music I liked + what music I wanted to listen to. I think this is in part due to the fact that almost any music was available - which made choosing even more difficult.

Anyway. Over the weekend I took apart a 5.5th gen iPod Classic (or iPod Video) and made it suit 2022 a little better :D

Source: Building an iPod for 2022 | Ellie.wtf

Digital to analogue and back again

It’s good to have Warren Ellis back. I have no opinion on this other than we should believe women when they accuse men of abuse.

His reflections on going analogue in 2021 and then coming back to digital workflows is interesting.

Someone sent me this article the other day, and here’s the quote we both independently flagged from it:

“But just because something makes waves on Twitter doesn’t mean it actually matters to most people. According to the Pew Research Center, only 23 percent of U.S. adults use Twitter, and of those users, “the most active 25% … produced 97% of all tweets.” In other words, nearly all tweets come from less than 6 percent of American adults. This is not a remotely good representation of public opinion, let alone newsworthiness, and treating it as such will inevitably result in wrong conclusions.”

I’m not as up to date on some things as I used to be, but, framing it like that — what am I really missing? Value is not necessarily intrinsic to a digital service (or most other things). We choose to invest these things with value. And sometimes we’re too caught up in the stream to reframe these things and do a proper test on them. It doesn’t feel right to celebrate snapping out of long-term behavioral loops that one allowed to form in the first damn place. One just gets it done and then keeps getting it done until it’s better, I think.

There’s a tech industry term: dogfooding. It means using your own product or service. The inventor of Twitter fucks off to silent tech-free meditation retreats for weeks at a time. How was that not a red flag?

Source: Going Analogue, Returning To Digital – WARREN ELLIS LTD

Chrome OS Flex

About 18 months ago, Google acquired Neverware, a company that took the open-source version of Chrome OS and customised it for the schools market.

The new version of Chrome OS, called ‘Flex’, can be installed on pretty much any device and also includes Linux containers. Interesting!

Google is positioning Chrome OS Flex as an answer to old Mac and Windows PCs that might not be able to handle the latest version of their native OS and/or that might not be owned by folks with budgets to replace the devices. Rather than buying new hardware, consumers or IT departments could install the latest version of Chrome OS Flex.
Source: Google turns old Macs, PCs into Chromebooks with Chrome OS Flex | Ars Technica

OKRs as institutional memory

Rick Klau, formerly of Google Ventures, is a big fan of OKRs (or ‘Objectives and Key Results’). They’re different from KPIs (or ‘Key Performance Indicators’) for various reasons, including the fact that they’re transparent to everyone in the organisation, and build on one another towards organisational goals.

In this post, Klau talks about OKRs as a form of organisational memory, which is why he’s not fond of changing them half-way through a cycle just because there’s new information available.

Let’s not distract ourselves just because someone had a good idea on a Tuesday standup meeting; let’s finish the stuff we said we were going to do. We might not succeed at all of it. In fact, we probably won’t, but we’ll have learned more and more. You can encode that. That becomes part of the institutional memory at the organization. (link and emphasis mine)
Source: OKRs as institutional memory | tins ::: Rick Klau's weblog

Nesta's predictions for 2022

Nesta shares its ‘Future Signals’ for 2022, some predictions about how things might shake out this year. I’d draw your attention in particular to climate inactivism coupled with quantifying carbon, as well as health inequalities around the quality of sleep.

Under the microscope this year we look at topics that range from sleep as a new dimension of health inequality to where our food will be grown in future. We ask complicated questions too. Is carbon counting really a tool for behaviour change? How will Covid-related service closures impact families? Our Nesta authors don’t offer up easy answers, but this collection should help you to distinguish the signal from the noise in 2022 and beyond.
Source: Future Signals – what we're watching for in 2022 | Nesta

Medieval Fantasy City Generator

The history geek in me loves this so much. And the educator interested in digital literacies loves the fact that you have to manipulate the URL to generate different types of village / town / city!

Medieval Fantasy City

Source: Medieval Fantasy City Generator

Blockchain and trusted third parties

As Cory Doctorow points out, merely putting something on a blockchain doesn’t make the data itself ‘trusted’ (or useful!)

In passing, it’s interesting that he cites Vinay Gupta in the piece, as Vinay is someone I’ve historically had a lot of time for. However, Mattereum (NFTs for physical assets) just… seems like a distraction from more important work he’s previously done?

In other words:

if: problem + blockchain = problem - blockchain

then: blockchain = 0

The blockchain hasn’t added anything to the situation, except considerable cost (which could just as easily be spent on direct transfers to poor farmers, assuming you could find someone you trust to hand out the money) and complexity (which creates lots of opportunities for cheating).

Source: The Inevitability of Trusted Third Parties | Cory Doctorow

On hobbies

This was linked to in the latest issue of Dense Discovery with the question of who amongst the readership has taken up a hobby recently?

As Anne Helen Petersen points out, it’s really hard to start a new hobby as an adult, not only for logistical reasons but because of the self-narrative that goes with it.

For me, the gulf between how good I am at something when starting it, and how good I want to be at a thing is often just too off-putting…

To me, that’s what I think a real hobby feels like. Not something you feel like you’re choosing, or scheduling — not a hassle, or something you resent or feel bad about when you don’t do it. Earlier this week, Katie Heaney wrote a piece in The Cut that speaks to what I think a lot of people feel when they think about their hobbies: she keeps trying to start one, but can’t make it stick. The truth is, it’s really really hard to start a hobby as an adult — it feels unnatural, or forced, or performative. We try to force ourselves into hobbies by buying things (see: Amanda Mull’s piece on the “trophies” of the new domesticity) but a Kitchen-Aid will not make you like cooking.

It’s also hard when the messages about what you should be doing with your leisure time are so incredibly contradictory: you should devote yourself to self-care, but also spend more time on your children and partner; you should liberate yourself from the need to monetize your hobby but also have enough money to do the hobby in the first place. This “Smarter Living” piece in the NYT on what to do with a day off is emblematic of just how fucked up our leisure messaging has become: you should “embrace laziness,” “evaluate your career,” “have a family meal,” “fix your finances,” “do that one thing you’ve been putting off,” AND/OR “do nothing,” AND THEN tweet the author about what you did over the weekend!

Source: What a Hobby Feels Like | Anne Helen Petersen

Reducing offensive social media messages by intervening during content-creation

Six per cent isn’t a lot, but perhaps a number of approaches working together can help with this?

The proliferation of harmful and offensive content is a problem that many online platforms face today. One of the most common approaches for moderating offensive content online is via the identification and removal after it has been posted, increasingly assisted by machine learning algorithms. More recently, platforms have begun employing moderation approaches which seek to intervene prior to offensive content being posted. In this paper, we conduct an online randomized controlled experiment on Twitter to evaluate a new intervention that aims to encourage participants to reconsider their offensive content and, ultimately, seeks to reduce the amount of offensive content on the platform. The intervention prompts users who are about to post harmful content with an opportunity to pause and reconsider their Tweet. We find that users in our treatment prompted with this intervention posted 6% fewer offensive Tweets than non-prompted users in our control. This decrease in the creation of offensive content can be attributed not just to the deletion and revision of prompted Tweets -- we also observed a decrease in both the number of offensive Tweets that prompted users create in the future and the number of offensive replies to prompted Tweets. We conclude that interventions allowing users to reconsider their comments can be an effective mechanism for reducing offensive content online.
Source: Reconsidering Tweets: Intervening During Tweet Creation Decreases Offensive Content | arXiv.org
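The 6% figure is a relative reduction in the rate of offensive tweets between the prompted and non-prompted groups. A toy calculation with made-up counts (the per-user rates aren't given in the abstract) shows what that means:

```shell
# Relative reduction in offensive-tweet rate, treatment vs control,
# as a whole-number percentage. The counts are hypothetical, chosen
# only to illustrate the reported 6% effect size.
relative_reduction_pct() {
  control=$1    # offensive tweets per 1,000 users, control group
  treatment=$2  # offensive tweets per 1,000 users, prompted group
  echo $(( (control - treatment) * 100 / control ))
}

# Hypothetical: 50 per 1,000 in control vs 47 per 1,000 in treatment.
relative_reduction_pct 50 47  # prints 6
```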

The burnout epidemic

I work an average of about 25 hours per week and I’m tired at the end of it. I can’t even imagine how I coped in my twenties while teaching.

Escalator with man in suit asleep on it

Textile mill workers in Manchester, England, or Lowell, Massachusetts, two centuries ago worked for longer hours than the typical British or American worker today, and they did so in dangerous conditions. They were exhausted, but they did not have the 21st-century psychological condition we call burnout, because they did not believe their work was the path to self-actualization. The ideal that motivates us to work to the point of burnout is the promise that if you work hard, you will live a good life: not just a life of material comfort, but a life of social dignity, moral character and spiritual purpose.

[…]

This promise, however, is mostly false. It’s what the philosopher Plato called a “noble lie”, a myth that justifies the fundamental arrangement of society. Plato taught that if people didn’t believe the lie, then society would fall into chaos. And one particular noble lie gets us to believe in the value of hard work. We labor for our bosses’ profit, but convince ourselves we’re attaining the highest good. We hope the job will deliver on its promise, and hope gets us to put in the extra hours, take on the extra project and live with the lack of a raise or the recognition we need.

Source: Your work is not your god: welcome to the age of the burnout epidemic | The Guardian

Check your perspective

A useful and illustrative story from Sheila Heen, author of Difficult Conversations: How To Discuss What Matters Most, about why it’s useful to understand other people’s context.

Traffic lights

It reminds me, I sometimes tell this story about my eldest son. His name is Ben. He’s 22 now, but when he was about three, we were driving down the street. We stopped at a traffic light, and we were working on both colors and also traffic rules, because at the time we lived on kind of a busy street in Cambridge. So we’re stopped at the light. And I say, “Hey, Ben. What color is the light?” And he says, “It’s green.” I said, “Ben, we’re stopped at the light. What color is the light? Take a good look.” And he goes, “It’s green.” And when it turns, he says, “It’s red. Let’s go.”

Now, the kid seemed bright in most other ways. So I just thought like, what is going on with him? My first hypothesis is maybe he’s color blind, which then that would be my husband’s fault. At least I thought at the time, it’s my husband’s fault. I’ve since been informed it would have been my fault.

So I started collecting data. I’m running a little scientific experiment of my own. So I start asking him to identify red and green in other contexts, and he gets it right every time. And yet every time we come to a traffic light, he’s still giving me opposite answers, because I get a little obsessed with this.

My second hypothesis, by the way, is that he is screwing with me, which I certainly had some data to support. This went on for about three weeks. It wasn’t until maybe three weeks later, and I think my mother-in-law was in town. So I was in the back seat sitting next to Ben, and we stopped at a traffic light. And I suddenly realized that from where he sits in his car seat, he usually can’t see the light in front of us, because the headrest is in the way or it’s above the level of the windshield, windscreen as they say in Europe. So he’s looking out the side window at the cross traffic light.

Now just think about the conversation from his point of view. He’s looking at the light, it’s green; I’m insisting that it’s red, and he’s like, you know, my mother seems right in most other ways, but she’s just wrong about this. The reason that that experience has stuck with me all these years is that it’s such a great illustration of the fact that where you sit determines what you see.

Source: Red Light Green Light | James Sevedge

Productivity dysmorphia

This is a useful term for “the intersection of burnout, imposter syndrome, and anxiety”.

Say you manage a coffee shop. In one day, you placed all the orders with your vendors, cleaned all the machines, launched a new promotional push, scheduled your employees’ shifts for the following month, and responded to every review and email. In this hypothetical scenario, you did great! You got all those tasks done and were attentive to your employees’ needs for time off and fair schedules. So why do you still feel like you didn’t do enough and you’re failing? Productivity dysmorphia.

[…]

Productivity dysmorphia can impact you outside of your job, too. Say you were aiming for a seven-day streak on your Peloton, but you were too tired or had too much work to do on that last day. You might feel like you are a failure for not working out that day, but that just isn’t true. You worked out the six days before that. Missing one goal doesn’t invalidate everything else you’ve done up until that point. We all get overwhelmed and overworked.

Try to reconsider what you think of as “productivity.” It’s productive to get all your work done, yes, and productive to work out or devote a certain amount of time every night to your side job or hobby. It’s also productive to rest. Relaxing and refreshing your mind and body will enable you to accomplish more in the near future without risking the dreaded burnout. Celebrate everything you do as a step toward productivity. Write down your rest periods, too. They count.

Source: How to Overcome ‘Productivity Dysmorphia’ | Lifehacker

Twitter's decline into right-leaning hellsite

I quit Twitter at the start of December. I was an early adopter, joining in the same year my son was born, but 15 years later it’s gone from a force for good to a rage machine. I don’t want anything more to do with it.

The study looked at a sample of 4% of all Twitter users who had been exposed to the algorithm (46,470,596 unique users). It also included a control group of 11,617,373 users who had never received any automatically recommended tweets in their feeds.

[…]

The authors analysed the “algorithmic amplification” effect on tweets from 3,634 elected politicians from major political parties in seven countries with a large user base on Twitter: the US, Japan, the UK, France, Spain, Canada and Germany.

Algorithmic amplification refers to the extent to which a tweet is more likely to be seen on a regular Twitter feed (where the algorithm is operating) compared to a feed without automated recommendations.

[…]

The researchers found that in six out of the seven countries (Germany was the exception), the algorithm significantly favoured the amplification of tweets from politically right-leaning sources.

Overall, the amplification trend wasn’t significant among individual politicians from specific parties, but was when they were taken together as a group. The starkest contrasts were seen in Canada (the Liberals’ tweets were amplified 43%, versus those of the Conservatives at 167%) and the UK (Labour’s tweets were amplified 112%, while the Conservatives’ were amplified at 176%).
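To make the metric concrete, here is a minimal sketch of how an amplification figure like those above can be read: the percentage by which a tweet’s reach in algorithmic feeds exceeds its reach in feeds without automated recommendations. The function name and all numbers are hypothetical illustrations, not the study’s actual methodology.

```python
# Hypothetical sketch of the "algorithmic amplification" metric:
# how much more a tweet is seen in algorithmic feeds than in the
# control group's reverse-chronological feeds. Numbers are invented.

def amplification(algo_reach: float, control_reach: float) -> float:
    """Return percentage amplification; 0 means equal reach in both feeds."""
    if control_reach <= 0:
        raise ValueError("control reach must be positive")
    return (algo_reach - control_reach) * 100 / control_reach

# A tweet seen 1,430 times per million users in algorithmic feeds but
# only 1,000 times per million in the control group is amplified by 43%.
print(amplification(1430, 1000))  # → 43.0
```

On this reading, the Canadian Conservatives’ 167% figure would mean their tweets reached roughly 2.7 times as many users under the algorithm as without it.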

Source: Twitter’s algorithm favours the political right, a recent study finds | The Conversation

Explaining ideas

This comes at things from a branding/advertising perspective, but I appreciate the focus on clarity of language. After all, clarity of language is clarity of thought.

Ideas are thoughts but not all thoughts are “ideas.” Here’s an example of the use of the word “idea” in an agency setting: “I have an idea — let’s do something with augmented reality or Blockchain or make a special lens.” This isn’t wrong; it’s sloppy.

In the traditional industry sense, “idea” means a novel concept. But when it’s used as in this example, it masks the lack of an actual idea  —  like when someone dumps in the word “strategic” before they say something that’s not strategic. It ups the importance of what comes next. The problem: sometimes this works as a meeting tactic but does not lead to good or clear thinking.

Compare this thought with the use of the word “idea” as a novel concept: “I have an idea  —  I want to create a tool that runners can use to track how far they’ve run and then compete with each other by sharing their achievements via the Internet. They’ll track it via this technology in their shoe which will talk to their computer.”

Source: How to explain an idea: a mega post | Mark Pollard

BBC Archives and the changing of history

On the one hand, I’m glad that the BBC is ensuring that some of its archive material is a bit more in keeping with our (hopefully more enlightened) sensibilities.

However, on the other hand, why do this in secret?

“The sinister fact about literary censorship in England,” Orwell wrote back in 1945, “is that it is largely voluntary.” And so, indeed, it is. Over the weekend, the Daily Telegraph reported that “an anonymous Radio 4 Extra listener” had “discovered the BBC had been quietly editing repeats of shows over the past few years to be more in keeping with social mores.” To which the BBC said . . . well, yeah. In a statement addressing the charge, the institution confirmed that “on occasion we edit some episodes so they’re suitable for broadcast today, including removing racially offensive language and stereotypes from decades ago, as the vast majority of our audience would expect.” Thus, in the absence of law or regulation, has the British establishment begun to excise material it finds inappropriate by today’s lights.

[…]

This raises a host of important questions — chief among which is: Why, if “the vast majority” of the BBC’s audience expects the organization to render its archives more “suitable,” has it been doing so in secret? Again: In the Internet age, changes made to source material tend to be iterative rather than additive. When the New York Times updates a story in its newspaper, one can plausibly obtain both copies. By contrast, when the New York Times updates a story on its website, the original page disappears. By its own admission, the BBC has been deleting entire sketches from comedy series that are 50, 60, or 70 years old, many of which can be heard only with the BBC’s permission. Are we simply to assume that the public supports this development? And, if so, are we permitted to wonder why the BBC was not open about it?

Source: BBC Censors Its Own Archives | National Review

Private schools having charitable status is an absolute scam

I’ve always been against private schooling. I’m glad that others, even those who went to them themselves, are also seeing how bad they are for society.

I hate the new trend of British private schools opening branches abroad because the reason, it seems to me, is naked and unreflecting expansionism. It’s not spreading the original institution’s educational values because, as the Times investigation shows, they’re all too ready to drop those values in order to continue to trade. The desire for revenue obviously plays a part but, as the institutions don’t make profits, I don’t think personal financial rewards for the various executive headteachers or boards of governors are a huge factor. It’s less intelligent than that. It comes from an ill-considered capitalistic urge for growth, nothing more thought through than bigger is better.

This is the same reason McDonald’s opened a branch in Soviet Moscow, but that was fine because, as far as I know, McDonald’s has never applied for charitable status. What is astonishing is how, by conducting themselves in this way, private schools seem to have given up on making a meaningful argument to retain that status themselves. They’ve just stopped caring about the views of the likes of me. Is the right wing of the Conservative party now so completely dominant that the idea of keeping the sympathy of anyone on the left or in the centre feels like a waste of time?

Source: Expansionist private schools need a lesson in morality | David Mitchell | The Guardian

Your attention was stolen

I still find it hard to trust Johann Hari’s writing, but this is more introspective and covers a subject that we all know is an issue: attention.

For me, despite being ‘verified’ on Twitter and having what used to be considered a decent number of followers, I’ve deactivated my account. I think it’s for the last time. I’m so much calmer when not using it.

I realised that to heal my attention, it was not enough simply to strip out distractions. That makes you feel good at first – but then it creates a vacuum where all the noise was. I realised I had to fill the vacuum. To do that, I started to think a lot about an area of psychology I had learned about years before – the science of flow states. Almost everyone reading this will have experienced a flow state at some point. It’s when you are doing something meaningful to you, and you really get into it, and time falls away, and your ego seems to vanish, and you find yourself focusing deeply and effortlessly. Flow is the deepest form of attention human beings can offer. But how do we get there?
Source: Your attention didn’t collapse. It was stolen | The Guardian

Control and responsibility

A massive over-simplification, but then that’s the point of 2x2 grids. Of course, everyone wants to think they’re in the top-right corner…

In many situations, we have the freedom to choose. We can choose a quadrant or we can choose not to participate. And if we’re lucky or care enough, we can choose who to vote for, who to work for and where we’re headed.
Source: The control/responsibility matrix | Seth's Blog

Spatial Finance

Using real-time satellite imagery to ensure that people are building (or not-building) what they say they’re going to.

‘Spatial finance’ is the integration of geospatial data and analysis into financial theory and practice. Earth observation and remote sensing combined with machine learning have the potential to transform the availability of information in our financial system. It will allow financial markets to better measure and manage climate-related risks, as well as a vast range of other factors that affect risk and return in different asset classes.
Source: Spatial Finance Initiative - Greening Finance and Investment

Health surveillance

It’s possible to be entirely in favour of mass vaccination (as I am) while also being concerned about the over-reach of states with our personal health data.

As this article discusses, based on a report from a German non-profit called AlgorithmWatch, such health surveillance is being normalised due to the requirements of responding to a global pandemic.

The idea that technology can be used to solve complex social issues, including public health, is not a new one. But the pandemic strongly influenced how technology is applied, with much of the push coming from public health policymaking and public perceptions, said the report.

The report also highlighted the growing divide between people who fervently defend the schemes and those who staunchly oppose them - and how fear and misinformation have influenced both sides.

Source: Pandemic Exploited To Normalise Mass Surveillance? | The ASEAN Post

NFTs, financialisation, and crypto grifters

The video is over two hours long and I’m still only half-way through, but I can already highly recommend it. There’s some technical language, as befits the nature of what’s discussed, but I really appreciate it going right back to the financial crisis to explain what’s going on.


Source: The Problem With NFTs | YouTube

Tether and crypto price manipulation

You’d expect Jacobin to be against crypto, but this is the first level-headed explanation of the ‘Tether controversy’ I’ve seen.

There is no conceivable universe in which cryptocurrency exchanges should need an exponentially expanding supply of stablecoins to facilitate daily trading. The explosion in stablecoins and the suspicious timing of market buys outlined in the 2017 paper suggest — as a 2019 class-action lawsuit alleges — that iFinex, the parent company of Tether and Bitfinex, is printing tethers from thin air and using them to buy up Bitcoin and other cryptocurrencies in order to create artificial scarcity and drive prices higher.

Tether has effectively become the central bank of crypto. Like central banks, they ensure liquidity in the market and even engage in quantitative easing — the practice of central banks buying up financial assets in order to stimulate the economy and stabilize financial markets. The difference is that central banks, at least in theory, operate in the public good and try to maintain healthy levels of inflation that encourage capital investment. By comparison, private companies issuing stablecoins are indiscriminately inflating cryptocurrency prices so that they can be dumped on unsuspecting investors.

This renders cryptocurrency not merely a bad investment or speculative bubble but something more akin to a decentralized Ponzi scheme. New investors are being lured in under the pretense that speculation is driving prices when market manipulation is doing the heavy lifting.

This can’t go on forever. Unbacked stablecoins can and are being used to inflate the “spot price” — the latest trading price — of cryptocurrencies to levels totally disconnected from reality. But the electricity costs of running and securing blockchains is very real. If cryptocurrency markets cannot keep luring in enough new money to cover the growing costs of mining, the scheme will become unworkable and financially insolvent.

No one knows exactly how this would shake out, but we know that investors will never be able to realize the gains they have made on paper. The cryptocurrency market’s oft-touted $2 trillion market cap, calculated by multiplying existing coins by the latest spot price, is a meaningless figure. Nowhere near that much has actually been invested into cryptocurrencies, and nowhere near that much will ever come out of them.

Source: Cryptocurrency Is a Giant Ponzi Scheme | Jacobin

Co-ops and DAOs

Handy article, especially for those deep in the ‘capitalist realism’ (or neoliberalism) that the author describes.

Although co-ops and DAOs are both collectively owned and co-determined organizational forms, there are some key differences. Primarily, cooperatives have one-member, one-vote governance. This means that people vote, not dollars. No single member of a cooperative can purchase more power than anyone else.

While it is possible for DAOs to emulate cooperative governance, it’s more common to observe the easier-to-implement governance pattern of one-token, one-vote, since verifying one’s personhood is still a nascent field in the world of blockchain.

[…]

From my experiences in the two spaces, I have noticed that DAOs tend to be better at enabling collective ownership at scale, even if their cultural understanding of the rights, responsibilities, and accountability associated with ownership is comparatively underdeveloped. And while cooperatives tend to be less successful in securing funding, they are also more likely, through their sober rejection of capitalist realism, to correctly address the root causes of inequity. Below, I’ll share some of the key takeaways I have gleaned about what DAOs and co-ops can learn from each other.

Source: What Co-ops and DAOs Can Learn From Each Other

Hype levels

Handy. I do like typologies and scales.

Today‘s tech industry is obsessed with the big futures. The metaverses, the next internets — you name it. Hype is everywhere, oozing out of the headlines of news articles, growing like mold all over my LinkedIn feed, and blinking at me whenever I open my inbox.

But hype is not always the same; there are different forms and levels. I‘ve been trying my hand on a categorization based on my experience and my understanding of the phenomenon. This categorization is intended to help people better understand which form of hype they‘re confronted with.

[…]

Think of this scale as a form of Richter scale to get a feel for how bad the hype is. A new technology doesn’t have to move through every single level but it most likely will at least reach level 3.

Source: The five Levels of Hype | Johannes Klingebiel

Individualism and collectivism in decentralised networks

I don’t agree with Paul Frazee’s point in this post about Twitter vs “p2p Twitters” (by which he means the Fediverse) but otherwise he makes good points about governance and what he calls “operational collectivism”.

There are two kinds of resources in a network:
  • Individualist. The resource is owned by one stakeholder and doesn’t require cross-party coordination. Examples might include: tweets, blogposts, personal websites, likes and comments.
  • Collectivist. The resource is owned by multiple stakeholders¹ and needs coordination between them. Examples might include: naming registries, package managers, cryptocurrency account balances, aggregated comment threads.
I start the conversation here because it sets the context for all decentralization: that we have mastered individualist operation and collectivist standardization but have failed at collectivist operation. The inability to collectively operate networks has created the conditions for large monopolies on the Internet.

[…]

Whoever operates a collective resource has the power to change its implementation. The reason we decentralize operation is to distribute that power of implementation. If the stakeholders have the power of implementation, they’re able to ensure the resource represents their interests.

[…]

What’s the point of decentralization? It’s to ensure that the stakeholders — the end users — are represented. Individualism enforces personal control, while decentralized collectivism produces an intransigent consensus. We limit collectivist systems because they’re powerful systems, and power must always be checked.

Source: Back to basics: What is the point of decentralization? | Paul Frazee

Web3 and Ed3 are both problematic

Web3 is being discussed as if it’s anything other than the financialisation of everything. This post about ‘Ed3’ really struggles to square that circle when it comes to education. There are so many issues with it that I don’t really know where to start.

The bit that really jumped out for me, though, given that I’ve spent a decade working on Open Badges, is the section on credentialing. The cat is out of the bag by this point, especially in the “only paying for what you need” language. The whole point of education is that you don’t know what kind of person you’ll be at the end of it.

Anything else is just training.

Imagine if universities were fractionalized and you could earn the micro-credentials that mattered most for your career, only paid for what you needed, and owned a life-long portfolio with those credentials that were interoperable across all institutes & industries?

Web3 will also enable the metaverse to take shape over the next few decades; a universe of many buildable worlds that operate on decentralized infrastructure. The metaverse will make it possible to do everything we can do in the real world but enhanced by digital experiences & possible in an entirely virtual world.

Source: From Web3 to Ed3 - Reimagining Education in a Decentralized Worl… — Mirror

Is QWERTY a really bad keyboard layout?

I’ve been able to touch-type since I was about 12 years of age, thanks to Mavis Beacon Teaches Typing. Like most people, I use the QWERTY layout, but I’ve always been curious about other layouts.

Apparently, it’s a bit of a myth that QWERTY was designed to slow typists down in case the mechanical keys got stuck. In the last edition of the Ultimate Typing Championship, all but one of the 26 competitors used QWERTY (and the one using the Dvorak layout came 12th).

The main thing to consider, in my opinion, is comfort. I remember being shocked once when I bought a keyboard and it came with a large warning that the use of any keyboard and mouse can cause ‘serious’ injuries.

This article talks about RSI (Repetitive Strain Injury), CTS (Carpal Tunnel Syndrome) and CTP (Carpal Tunnel Pressure).

If keyboard use does carry the risk of developing RSI, what is it about the keyboard that’s bad? Is it the physical design, the key layout, hand/wrist posture, or something else? My impression is that key layout is a relatively small component here, for several reasons. The first is my own experience, according to which it’s much more important to use a split keyboard, say, than the appropriate layout if I want to avoid RSI flare-ups. The second is that CTS is largely caused by CTP, which in theory seems more impacted by physical design (chiefly whether a keyboard is split and/or tilted/tented) and less by finger stretching or the horizontal rotating we do with our hands to reach keys at the sides of the keyboard. The third is Carpalx’s model, which suggests that established alternatives like Dvorak and Colemak, while better on the whole, use the pinky more heavily than does QWERTY – maybe it is a little bit bad to reach for the outermost keys, but any layout will have some keys at the extremes, so perhaps the difference between layouts just isn’t that great.

What about QWERTY specifically? I wasn’t really able to find any research on this. Maybe that’s because it’s very hard to design experiments to test it? You can’t just take a bunch of people and ask half of them to start using Dvorak, because there’s a significant learning curve involved. But you don’t want to find out if learning a new layout is good, you want to find out using it is good once you have learned it. There is no natural control group for these experiments, and no obvious placebo.

In sum, keyboard use in general does seem to cause RSI, but the risk seems fairly small. Bad key layouts may only be a minor part of the RSI risk, though QWERTY does seem worse than most alternatives, relatively speaking. The evidence here is weak and my confidence intervals are wide.

Source: How Bad Is QWERTY, Really? A Review of the Literature, such as It Is | Erich Grunewald

A low-tech solution for personal warmth

My family, especially the female members, have always been big fans of the hot water bottle. So much so, in fact, that one of my wife’s favourite presents was a long snake-shaped hot water bottle that she can use in various configurations.

As we face a bit of an energy crisis, hot water bottles are definitely something more people should be using, as this article explains.

A hot water bottle is a sealable container filled with hot water, often enclosed in a textile cover, which is directly placed against a part of the body for thermal comfort. The hot water bottle is still a common household item in some places – such as the UK and Japan – but it is largely forgotten or disregarded in most of the industrialised world. If people know of it, they usually associate it with pain relief rather than thermal comfort, or they consider its use an outdated practice for the poor and the elderly.

As early as the 1500s, people started to use all kinds of portable containers filled with hot coals from the fire. These were used as foot warmers, hand warmers, and bed warmers. Most were made of metal, either brass or copper, and placed inside wooden or ceramic enclosures to prevent skin burns. Over time, hot coals were replaced by hot water, which is a cleaner and safer heat storage medium.

Initially, these first “real” hot water bottles were made from hard materials such as glass, metal, or stoneware. It was only with the invention of vulcanised rubber in the nineteenth century that more comfortable lightweight and flexible hot water bottles became an option. Spanish friends told me that hot water bottles used to be made from animal skins, but I could not verify this. It may well be true, because all over the world there’s a long tradition of using “water skins” for storing liquids.

Source: The Revenge of the Hot Water Bottle | LOW←TECH MAGAZINE

Kids need life on the highest volume

This article is based on the author’s experiences as a teacher in state schools in the US. I should imagine the situation is exacerbated there, but it can’t be that great elsewhere, either.

My own kids seem like they’re OK. Our youngest, who’s had Covid like me this week, has gone back to remote learning, which she enjoys as she completes her work quickly and then does other things. I think it’s particularly hard on teenagers, like our eldest, who are preparing for important exams.

The data about learning loss and the mental health crisis is devastating. Overlooked has been the deep shame young people feel: Our students were taught to think of their schools as hubs for infection and themselves as vectors of disease. This has fundamentally altered their understanding of themselves.

When we finally got back into the classroom in September 2020, I was optimistic, even as we would go remote for weeks, sometimes months, whenever case numbers would rise. But things never returned to normal.

When we were physically in school, it felt like there was no longer life in the building. Maybe it was the masks that made it so no one wanted to engage in lessons, or even talk about how they spent their weekend. But it felt cold and soulless. My students weren’t allowed to gather in the halls or chat between classes. They still aren’t. Sporting events, clubs and graduation were all cancelled. These may sound like small things, but these losses were a huge deal to the students. These are rites of passages that can’t be made up.

[…]

They are anxious and depressed. Previously outgoing students are now terrified at the prospect of being singled out to stand in front of the class and speak. And many of my students seem to have found comfort behind their masks. They feel exposed when their peers can see their whole face.

[…]

At the beginning of the pandemic, adults shamed kids for wanting to play at the park or hang out with their friends. We kept hearing, “They’ll be fine. They’re resilient.” It’s true that humans, by nature, are very resilient. But they also break. And my students are breaking. Some have already broken.

When we look at the Covid-19 pandemic through the lens of history, I believe it will be clear that we betrayed our children. The risks of this pandemic were never to them, but they were forced to carry the burden of it. It’s enough. It’s time for a return to normal life and put an end to the bureaucratic policies that aren’t making society safer, but are sacrificing our children’s mental, emotional, and physical health.

Our children need life on the highest volume. And they need it now.

Source: I’m a Public School Teacher. The Kids Aren’t Alright. | Common Sense

Paying for everything twice

As someone who’s recently started using a budgeting app, and who has a lot of music-making equipment lying around unused, I concur.

One financial lesson they should teach in school is that most of the things we buy have to be paid for twice.

There’s the first price, usually paid in dollars, just to gain possession of the desired thing, whatever it is: a book, a budgeting app, a unicycle, a bundle of kale.

But then, in order to make use of the thing, you must also pay a second price. This is the effort and initiative required to gain its benefits, and it can be much higher than the first price.

A new novel, for example, might require twenty dollars for its first price—and ten hours of dedicated reading time for its second. Only once the second price is being paid do you see any return on the first one. Paying only the first price is about the same as throwing money in the garbage.

Likewise, after buying the budgeting app, you have to set it all up, and learn to use it habitually before it actually improves your financial life. With the unicycle, you have to endure the presumably painful beginner phase before you can cruise down the street. The kale must be de-veined, chopped, steamed, and chewed before it gives you any nourishment.

If you look around your home, you might notice many possessions for which you’ve paid the first price but not the second. Unused memberships, unread books, unplayed games, unknitted yarns.

Source: Everything Must Be Paid for Twice | Raptitude

Ancient cynicism

As with stoicism, we’ve lost the ancient meaning of the word ‘cynicism’. I think you can probably tell a lot about how much love I have for Diogenes given that I named my phone after him (I name all my devices so I can easily identify them on wifi networks, etc.)

Image of half-full and half-empty cups
The original cynicism was a philosophical movement likely founded by Antisthenes, a student of Socrates, and popularized by Diogenes of Sinope around the fifth century B.C. It was based on a refusal to accept the assumptions and habits that discourage people from questioning conventional dogmas, and thus hold us back from the search for deep wisdom and happiness. Whereas a modern cynic might say, for instance, that the president is an idiot and thus his policies aren’t worth considering, the ancient cynic would examine each policy impartially.

The modern cynic rejects things out of hand (“This is stupid”), while the ancient cynic simply withholds judgment (“This may be right or wrong”).

[…]

To pivot from the modern to the ancient, I recommend focusing each day on several original cynical concepts, none of which condemns the world but all of which lead us to question, and in many cases reject, worldly conventions and practices.

  1. Eudaimonia ("satisfaction")
  2. Askesis ("discipline")
  3. Autarkeia ("self-sufficiency")
  4. Kosmopolites ("cosmopolitanism")
Source: We’ve Lost the True Meaning of Cynicism | The Atlantic

E2EE is for everyone

Not only has the current UK government underfunded the NHS since coming to power in an attempt to introduce market-based medicine, orchestrated the unprecedented national self-sabotage that is Brexit, and attacked the BBC, but they’re also trying to convince the British public that end-to-end encryption (E2EE) is only wanted by paedophiles.

The hypocrisy of it knows no bounds. These are the same politicians who rely on the E2EE of WhatsApp, Signal, and other messaging services to plot against one another and society in general.

Critics sometimes claim that encryption makes it impossible to subpoena or obtain a warrant for information from people’s phones — this is bizarre because governments already demand such data. What they are actually complaining about is that the “platform” — for instance Facebook — no longer wants to be able to see the content themselves. The warrant will have to be served upon the device owner, not upon the (social) network provider.

Good security demands that data that we share amongst family and friends should remain available only to those family and friends; and likewise that data which we share with businesses should remain only with those businesses, and should only be used for agreed business purposes.

Network providers — and, importantly, messaging-network and social-network providers — are helping their users obtain better data security by cutting themselves off from the ability to access plaintext content. Simply: they don’t need to see it, and it’s not their job to police or censor it. Their adoption of end-to-end encryption makes everyone’s data safer.

The world needs end-to-end encryption. It needs more of it. We need the privacy, agency, and control over data that end-to-end encryption enables. And encryption is needed everywhere and by everyone — not just by politicians and police forces.

Source: Why we need #EndToEndEncryption and why it’s essential for our safety, our children’s safety, and for everyone’s future #noplacetohide | dropsafe

The life-changing difference of an internet connection

As someone who’s seemingly around the same age as the author of this post, I agree that the internet has made my life better. I didn’t have it anywhere near as hard as them while growing up, but my online connections (and research) have certainly helped me escape into a different life.

This is part of the story of how the internet changed my life for the better. I’m an early millennial and I was raised online. Through the internet, I found friends, support, and the human connection that I was lacking in real life. I also found valuable information that helped me help myself and sometimes help others. The key with information is always to effectively filter the good from the bad, which is a genuine life skill unto itself. My life today isn’t perfect, but it’s better than it’s ever been. My message to all the people out there who are struggling is to believe in yourself. If you help yourself and you let others help you, things are never hopeless.

Source: The Internet Changed My Life | Pointers Gone Wild

Abusing AI girlfriends

I don’t often share this kind of thing because I find it distressing. We shouldn’t be surprised, though, that the kind of people who physically, sexually, and emotionally abuse other human beings do so in virtual worlds, too.

In general, chatbot abuse is disconcerting, both for the people who experience distress from it and the people who carry it out. It’s also an increasingly pertinent ethical dilemma as relationships between humans and bots become more widespread — after all, most people have used a virtual assistant at least once.

On the one hand, users who flex their darkest impulses on chatbots could have those worst behaviors reinforced, building unhealthy habits for relationships with actual humans. On the other hand, being able to talk to or take one’s anger out on an unfeeling digital entity could be cathartic.

But it’s worth noting that chatbot abuse often has a gendered component. Although not exclusively, it seems that it’s often men creating a digital girlfriend, only to then punish her with words and simulated aggression. These users’ violence, even when carried out on a cluster of code, reflects the reality of domestic violence against women.

Source: Men Are Creating AI Girlfriends and Then Verbally Abusing Them | Futurism

Pix and digital payments in Brazil

I came across this story via Benedict Evans' newsletter (it’s not the kind of thing I’d usually track). What I find interesting is that this is a hugely successful rollout of a digital payments system done by a central bank. It’s helping real people, including those in poverty.

Meanwhile, crypto tokens are held by crypto bros and middle-class white guys like myself trying to make a quick buck. Just goes to show that innovation doesn’t always come from where you expect.

Pix, rolled out by the Banco Central do Brasil in Nov. 2020, was built for efficiency and financial inclusion. It now has 107.5 million registered accounts, more than half of the country’s population. One year after implementation, more than half a trillion Brazilian reais were transacted through the low-cost payments system last month. According to central bank data, Pix payments volume is already equivalent to 80% of debit and credit card transactions.

[…]

“Except for very particular transactions, market penetration tends to 99% on all individual transfers,” [Julian Colombo, CEO of banking technology firm N5] added. However, the rollout has not been without hiccups, including kidnapping.

[…]

On a recent Sunday in Rio de Janeiro, a three-member samba band played for a crowded restaurant. At the end, they passed around the tambourine to collect money. One diner apologized, saying he did not have any cash on him. The drummer said, “No problem, I take Pix,” and proceeded to share his code — which can be an email, phone number or other easy-to-remember code — with the diner, who promptly transferred the money his way.

Source: Pix breaks ground in Brazil, shakes up payments market | S&P Global Market Intelligence

Nine planetary boundaries

This is a useful diagram to share in order to demonstrate that we might think we’re shafted with regard to climate change, but that pales into insignificance compared to pollution from chemicals and plastics.

The researchers say there are many ways that chemicals and plastics have negative effects on planetary health, from mining, fracking and drilling to extract raw materials to production and waste management.

“Some of these pollutants can be found globally, from the Arctic to Antarctica, and can be extremely persistent. We have overwhelming evidence of negative impacts on Earth systems, including biodiversity and biogeochemical cycles,” says Carney Almroth.

Global production and consumption of novel entities is set to continue to grow. The total mass of plastics on the planet is now over twice the mass of all living mammals, and roughly 80% of all plastics ever produced remain in the environment.

Plastics contain over 10,000 other chemicals, so their environmental degradation creates new combinations of materials – and unprecedented environmental hazards. Production of plastics is set to increase and predictions indicate that the release of plastic pollution to the environment will rise too, despite huge efforts in many countries to reduce waste.

Source: Safe planetary boundary for pollutants, including plastics, exceeded, say researchers | Stockholm Resilience Centre

Optimism about the future

I don’t have a particularly strong interest in sci-fi, nor do I have access to all of this paywalled post. However, I don’t need either to share a couple of insights.

First, I agree that utopia and dystopia are two sides of the same coin, depending on your view of what constitutes a flourishing human life. You don’t need to look far in our current situation to see that in action.

Second, while I’d probably broadly agree with the three conditions for optimism the author lays out, you could technically argue against all of them.

One possibility is that utopia and dystopia are just whatever the author decides to present as such. Take a war-torn unequal Malthusian future and add some soaring music and graphics of cities lighting up, and maybe audiences will see it as utopian. Or take a serene, pastel-colored post-scarcity hippie society and add a shirtless Sean Connery shouting that it’s all an illusion, and maybe it starts to seem like a creepy dystopia. (Of course, if this is what’s going on, there will be a tendency toward presenting any future as dystopian, since stories need external conflict; the world has to be “messed up” in some way in order for the protagonists to “fix” it.)

But in fact I submit that in order to be truly optimistic, a sci-fi world needs more than just a stirring theme song. It needs to present a future with several concrete features corresponding to the type of future people want to imagine actually living in. The “Wang Standard” is a good start, with its emphasis on the power of human effort, but in the end it relies on the somewhat circular notion of a “radically better” future. What does it mean for the future to be better? I submit that for a future to feel optimistic, it should feature the following elements:

  • Material abundance
  • Egalitarianism — broadly shared prosperity, relatively moderate status differences, and broad political participation
  • Human agency — the ability of human effort to alter the conditions of the world
Source: What makes an "optimistic" vision of the future? | Noahpinion

Reading is useless

I like this post by graduate student Beck Tench. Reading is useless, she says, in the same way that meditation is useless. It’s for its own sake, not for something else.

When I titled this post “reading is useless,” I was referring to a Zen saying that goes, “Meditation is useless.” It means that you meditate to meditate, not to use it for something. And like the saying, I’m being provocative. Of course reading is not useless. We read in useful ways all the time and for good reason. Reading expands our horizons, it helps us understand things, it complicates, it validates, it clarifies. There’s nothing wrong with reading (or meditating for that matter) with a goal in mind, but maybe there is something wrong if we feel we can’t read unless it’s good for something.

This quarter’s experiment was an effort to allow myself space to “read to read,” nothing more and certainly nothing less. With more time and fewer expectations, I realized that so much happens while I read, the most important of which are the moments and hours of my life. I am smelling, hearing, seeing, feeling, even tasting. What I read takes up place in my thoughts, yes, and also in my heart and bones. My body, which includes my brain, reads along with me and holds the ideas I encounter.

This suggests to me that reading isn’t just about knowing in an intellectual way, it’s also about holding what I read. The things I read this quarter were held by my body, my dreams, my conversations with others, my drawings and journal entries. I mean holding in an active way, like holding something in your hands in front of you. It takes endurance and patience to actively hold something for very long. As scholars, we need to cultivate patience and endurance for what we read. We need to hold it without doing something with it right away, without having to know.

Source: Reading is Useless: A 10-Week Experiment in Contemplative Reading | Beck Tench

The cost of a thing is the amount of life which is required to be exchanged for it

This article in The Atlantic by Alan Lightman points out how biophilic we have been historically as a species, and how that’s changed only recently.

None of this, of course, helps with the climate emergency and the concomitant biodiversity collapse. I read the WEF Global Risks Report for 2022 and, well, I’ve read more hopeful documents.

Distorted image of nature (by Nico Krijno)
Most of the minutes and hours of each day we spend in temperature-controlled structures of wood, concrete, and steel. With all of its success, our technology has greatly diminished our direct experience with nature. We live mediated lives. We have created a natureless world.

It was not always this way. For more than 99 percent of our history as humans, we lived close to nature. We lived in the open. The first house with a roof appeared only 5,000 years ago. Television less than a century ago. Internet-connected phones only about 30 years ago. Over the large majority of our 2-million-year evolutionary history, Darwinian forces molded our brains to find kinship with nature, what the biologist E. O. Wilson called “biophilia.” That kinship had survival benefit. Habitat selection, foraging for food, reading the signs of upcoming storms all would have favored a deep affinity with nature. Social psychologists have documented that such sensitivities are still present in our psyches today. Further psychological and physiological studies have shown that more time spent in nature increases happiness and well-being; less time increases stress and anxiety. Thus, there is a profound disconnect between the natureless environment we have created and the “natural” affections of our minds. In effect, we live in two worlds: a world in close contact with nature, buried deep in our ancestral brains, and a natureless world of the digital screen and constructed environment, fashioned from our technology and intellectual achievements. We are at war with our ancestral selves. The cost of this war is only now becoming apparent.

[…]

I am not so naive as to think that the careening technologization of the modern world will stop or even slow down. But I do think that we need to be more mindful of what this technology has cost us and the vital importance of direct experiences with nature. And by “cost,” I mean what Henry David Thoreau meant in Walden: “The cost of a thing is the amount of what I will call life which is required to be exchanged for it, immediately or in the long run.” The new technology in Thoreau’s day was the railroad, which he feared was overtaking life. Thoreau’s concern was updated by the literary critic and historian of technology Leo Marx in his 1964 book, The Machine in the Garden. That book describes the way in which pastoral life in America was interrupted by the technology and industrialization of the 19th and 20th centuries. Marx could not have imagined the internet and the smartphone, which arrived only a few decades later. And now I worry about the promise of an all-encompassing virtual world called the “metaverse,” and the Silicon Valley arms race to build it.  Again, it is not the technology itself that should concern us. It is how we use that technology, in balance with the rest of our lives.

Source: This Is No Way to Be Human - The Atlantic

Matching work activities to mind modes

This by Jakob Greenfeld reminds me of Buster Benson’s evergreen post Live like a hydra — especially the sub-section ‘Seven modes (for seven heads)’.

Of course, you can’t always be driven by what mood you happen to be in. Sometimes, you have to change things up to ensure that your mood changes. But hey, all bets are off during a pandemic, right?

I recently discovered a simple step-by-step process that significantly increased my personal productivity and made me happier along the way.

It costs $0 and no, it’s not some note-taking or to-do list system.

In short:

Step 1: develop meta-awareness of your state of mind.

Step 2: pattern-match to identify your mind’s most common modes.

Step 3: learn to pick activities that match each mode.

Source: Effortless personal productivity (or how I learned to love my monkey mind) – Jakob Greenfeld 

Does Not Translate 

I enjoyed some of these untranslatable words from languages other than English.

Sturmfrei (German) When all the people you live with are gone for a while and you have the whole place to yourself.

Gyakugire (Japanese) Getting mad at somebody because they got mad at you for something you did.

Bear favour (Swedish) To do something for someone with good intentions, but it actually has negative consequences instead.

Source: Does Not Translate – Words that don’t translate to other languages

How to be useless

I love articles that give us a different lens for looking at the world, and this one certainly does that. It also provides links for further reading, which I very much appreciate.

Zhuangzi argued that we can reclaim our lives, and be happier and more fulfilled, if we become more useless. In this, he went against many influential thinkers of his time, such as the Mohists. These followers of Master Mo (c470-391 BCE) prized efficiency and welfare above all. They insisted on cutting away all ‘useless’ parts of life – art, luxury, ritual, culture, leisure, even the expression of emotions – and instead focused on ensuring that people across the social classes receive essential material resources. The Mohists viewed many practices common at the time as immorally wasteful. Rather than a funeral rich with rituals following tradition, such as burial within three layers of coffins and a years-long mourning period, Mohists recommended simply digging a pit deep enough so the body doesn’t smell. You were permitted to cry on your way to and from the burial site, but then you needed to return to work and life.

Although the Mohists wrote more than 2,000 years ago, their ideas sound familiar to modern ears. We frequently hear how we should avoid supposedly useless things, such as pursuing the arts, or a humanities education (see the all-too-frequent slashing of liberal arts budgets at universities). Or it’s often said that we should allow for these things only insofar as they benefit the economy or human welfare. You might have felt this discomfort in your own life: the pressure from the meritocracy to serve some purpose, have some benefit, maximise some utility – that everything you do should be, in some sense, useful.

However, as we will show here, Zhuangzi offers an essential antidote to this pernicious means-ends way of thinking. He demonstrates that you can improve your life if you let go of the anxiety of wanting to serve a purpose. To be sure, Zhuangzi doesn’t altogether spurn usefulness. Rather, he argues that usefulness itself should not be life’s bottom line.

Source: How to be useless | Psyche Guides

Your accusations are your confessions

I didn’t know Stephen Downes had a political blog. These are his thoughts on cancel culture which, like most of what he says in general, I agree with.

Every time a conservative complains about censorship or ‘cancel culture’ we need to remind ourselves, and to say to them,

“You are the one complaining about cancel culture because you are the one who uses silencing and suppression as political tools to advance your own interests and maintain your own power.

“You are complaining about cancel culture because the people you have always silenced are beginning to have a voice, and they are beginning to say, we won’t be silent any more.

“And when you say the people working against racism and misogyny and oppression are silencing you, that tells us exactly who – and what – you are.”

“Your accusations are your confessions.”

Source: Cancelled | Leftish

Web3, the metaverse, and the DRM-isation of everything

I’ve been reading a report entitled Crypto Theses for 2022 recently. Despite my having some small investments in crypto, the world painted in that report is, quite frankly, dystopian.

The author of that report admits to being on the right of politics and, to my mind, this is the problem: we’ve got people who believe that societal control and the monetisation of everything in a free market economy are desirable.

This article focuses on Mark Zuckerberg’s announcement at the end of 2021 about the ‘metaverse’. This is something which is a goal of the awkwardly-titled ‘web3’ movement.

Perhaps I’m getting old, but to me technology should be about enabling humans to do new things or existing things better. As far as I can see, crypto/web3 just adds a DRM and monetisation layer on top of the open web?

In one sense, it's a vision of a future world that takes many long-existing concepts, like shared online worlds and digital avatars, and combines them with recently emerging trends, like digital art ownership through NFT technology and digital "tipping" for creators.

In another sense, it’s a vision that takes our existing reality — where you can already hang out in 2D or 3D virtual chat rooms with friends who are or are not using VR headsets — and tacks on more opportunities for monetization and advertising.

Source: Zuckerberg Convinced the Tech World That ‘the Metaverse’ Is the Future | Business Insider

America, fascism, and the first, second, and third 'solutions'

Jason Kottke reminds us of Toni Morrison’s “Ten Steps Towards Fascism” from 1995. As an historian, it was this bit that he also quoted that jumped out at me, though.

Let us be reminded that before there is a final solution, there must be a first solution, a second one, even a third. The move toward a final solution is not a jump. It takes one step, then another, then another.

To outsiders, Americans at this point seem like slowly-boiled frogs on their way to a fascist stew. Canadians seemingly understand the threat.

It’s terrifying when you think about it too much. (Most people in a position to do anything about it seemingly aren’t thinking about it…)

Source: Toni Morrison’s Ten Steps Towards Fascism

Persistent Practices and Pragmatism

I think Albert Wenger has discovered, however obliquely, Pragmatism. Once you realise that the correspondence theory of truth is nonsense, and that it makes more sense to think about truth as being “good in the way of belief”, the world starts making a lot more sense…

Yoga works. Meditation works. Conscious breathing works. By “works” I mean that these practices have positive effects for people who observe them. They can help build and retain strength and flexibility of both body and mind. The fact that they work shouldn’t be entirely surprising, given that these practices have been developed over thousands of years through trial and error by millions of people. The persistence of these practices by itself provides evidence of their effectiveness.

But does that mean the theories frequently cited to explain these practices are also valid? Do chakras and energy flows exist? I don’t want to rule this out – there have been various attempts to map chakras to the nervous and endocrine systems – but I think it is much more likely that these are pre-scientific explanations not unlike the phlogiston theory of combustion. I will refer to these as “internal theories,” meaning the theories that are generally associated with the practices historically.

Source: A Short Note on Persistent Practices | Continuations

Meetings and work theatre

The way that you do something is almost as important as what you do. However, I’ve noticed that during the pandemic, as people get used to working remotely (as I’ve done for a decade now), there’s definitely been some, let’s say, ‘theatre’ added to it all.

Meetings, the office’s answer to the theatre, have proliferated. They are harder to avoid now that invitations must be responded to and diaries are public. Even if you don’t say anything, cameras make meetings into a miming performance: an attentive expression and occasional nodding now count as a form of work. The chat function is a new way to project yourself. Satya Nadella, the boss of Microsoft, says that comments in chat help him to meet colleagues he would not otherwise hear from. Maybe so, but that is an irresistible incentive to pose questions that do not need answering and offer observations that are not worth making.

Shared documents and messaging channels are also playgrounds of performativity. Colleagues can leave public comments in documents, and in the process notify their authors that something approximating work has been done. They can start new channels and invite anyone in; when no one uses them, they can archive them again and appear efficient. By assigning tasks to people or tagging them in a conversation, they can cast long shadows of faux-industriousness. It is telling that one recent research study found that members of high-performing teams are more likely to speak to each other on the phone, the very opposite of public communication.

Performative celebration is another hallmark of the pandemic. Once one person has reacted to a message with a clapping emoji, others are likely to join in until a virtual ovation is under way. At least emojis are fun. The arrival of a round-robin email announcing a promotion is as welcome as a rifle shot in an avalanche zone. Someone responds with congratulations, and then another recipient adds their own well wishes. As more people pile in, pressure builds on the non-responders to reply as well. Within minutes colleagues are telling someone they have never met in person how richly they deserve their new job.

Source: The rise of performative work | The Economist

Vaccine Hesitancy as part of a Plague Anthology 

I’m not sure who’s behind this website, but it looks good. I appreciated the historical context behind vaccine hesitancy in cultures other than my own provided in the most recent post.

Anti-vaxxers adjacent to conspiracy theorists are nuts, but there’s definitely a communications angle to ensuring the effective roll-out of life-saving vaccines.

In Egypt, around 1800, there are reports of 60 000 deaths each year. The Ottoman ruler, Muhammad Ali Pasha, began in 1819 to institute a plan for general vaccinations and the logical people to carry this out were the barber-surgeons, known and trusted by the locals. While the Bedouin had long been enthusiastic about protecting their children in this way, the fellahin (peasantry) was reluctant, largely because they did not trust the government and thought it was a way of “marking” their children for conscription. Religious objections and concerns about mixing Muslim and Christian blood also played their part, and attempts to bribe the vaccinators were not uncommon.

After the serious epidemic of 1836, official efforts intensified, with barber-vaccinators being trained and records kept. Gradually, the message got through and by 1850, the decline in child mortality was affecting the population statistics. The following anecdote describes a perhaps surprising pocket of vaccine hesitancy.

Source: Vaccine Hesitancy – Egypt 1866 | Plague Anthology

Let's Settle This

This is good fun and, in fact, Laura and I used it to structure the upcoming Season 3 trailer for our podcast.

It's time to settle the endless internet debates.
Source: Let's Settle This

Signal's CEO on 'web3'

My first response to most new technological things is usually “cool, I wonder how I/we could use that?” With so-called ‘web3’, though, I’ve kind of thought it was bullshit.

This post by Moxie Marlinspike, CEO of Signal, goes a step further and includes opinions from someone who actually knows what they’re talking about.

I’m not sure what I think about the bit quoted below about not distributing infrastructure? In Marxist terms, it seems like not distributing or providing ownership of the means of production?

If we do want to change our relationship to technology, I think we’d have to do it intentionally. My basic thoughts are roughly:
  1. We should accept the premise that people will not run their own servers by designing systems that can distribute trust without having to distribute infrastructure. This means architecture that anticipates and accepts the inevitable outcome of relatively centralized client/server relationships, but uses cryptography (rather than infrastructure) to distribute trust. One of the surprising things to me about web3, despite being built on “crypto,” is how little cryptography seems to be involved!
  2. We should try to reduce the burden of building software. At this point, software projects require an enormous amount of human effort. Even relatively simple apps require a group of people to sit in front of a computer for eight hours a day, every day, forever. This wasn’t always the case, and there was a time when 50 people working on a software project wasn’t considered a “small team.” As long as software requires such concerted energy and so much highly specialized human focus, I think it will have the tendency to serve the interests of the people sitting in that room every day rather than what we may consider our broader goals. I think changing our relationship to technology will probably require making software easier to create, but in my lifetime I’ve seen the opposite come to pass. Unfortunately, I think distributed systems have a tendency to exacerbate this trend by making things more complicated and more difficult, not less complicated and less difficult.
Source: Moxie Marlinspike >> Blog >> My first impressions of web3
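One way to picture Marlinspike’s point about distributing trust without distributing infrastructure is content addressing: a client fetches data from a single centralised server but verifies it cryptographically, so it only has to trust the server’s availability, not its honesty. A minimal sketch — the dict-as-server and the function names are my own illustration, not anything from Marlinspike’s post:

```python
import hashlib


def content_address(data: bytes) -> str:
    """Derive an identifier from the content itself."""
    return hashlib.sha256(data).hexdigest()


def fetch_and_verify(server: dict, address: str) -> bytes:
    """Fetch from an untrusted, centralised store; trust comes from the hash check."""
    data = server[address]
    if content_address(data) != address:
        raise ValueError("server returned tampered data")
    return data


# A centralised server, modelled as a plain dict.
original = b"hello, web"
addr = content_address(original)
server = {addr: original}

assert fetch_and_verify(server, addr) == original

# If the server tampers with the content, the client detects it.
server[addr] = b"evil payload"
try:
    fetch_and_verify(server, addr)
except ValueError:
    pass  # tampering caught without any trusted infrastructure
```

This is the pattern behind content-addressed storage generally; the post itself discusses richer mechanisms, but the underlying principle — verify cryptographically rather than trust the operator — is the same.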

Update: Moxie Marlinspike has announced he’s stepping down as Signal CEO.

Pessimism of the intellect, optimism of the will.

Someone I once knew well used to cite Gramsci’s famous quotation: “Pessimism of the intellect, optimism of the will.” I’m having to channel that as I look forward to 2022.

Here’s the well-informed writer Charlie Stross on the ways he sees things panning out.

Climate: we're boned. Quite possibly the Antarctic ice shelves will be destabilized decades ahead of schedule, leading to gradual but inexorable sea levels rising around the world. This may paradoxically trigger an economic boom in construction—both of coastal defenses and of new inland waterways and ports. But the dismal prospect is that we may begin experiencing so many heat emergencies that we destabilize agriculture. The C3 photosynthesis pathway doesn't work at temperatures over 40 degrees celsius. The C4 pathway is a bit more robust, but not as many crops make use of it. Genetic engineering of hardy, thermotolerant cultivars may buy us some time, but it's not going to help if events like the recent Colorado wildfires become common.

Politics: we’re boned there, too. Frightened people are cautious people, and they don’t like taking in refugees. We currently see a wave of extreme right-wing demagogues in power in various nations, and increasingly harsh immigration laws all round. I can’t help thinking that this is the ruling kleptocracy battening down the hatches and preparing to fend off the inevitable mass migrations they expect when changing sea levels inundate low-lying coastal nations like Bangladesh. The klept built their wealth on iron and coal, then oil: they invested in real estate, inflated asset bubble after asset bubble, drove real estate prices and job security out of reach of anyone aged under 50, and now they’d like to lock in their status by freezing social mobility. The result is a grim dystopia for the young—and by “young” I mean anyone who isn’t aged, or born with a trust fund—and denial of the changing climate is a touchstone. The propaganda of the Koch network and the Mercer soft money has corrupted political discourse in the US, and increasingly the west in general. Australia and the UK have their own turbulent billionaires manipulating the political process.

Source: Oh, 2022! | Charlie’s Diary

Laptops aren't what they used to be

This guy went back to using a Lenovo ThinkPad T430 and explains why in this post. Over Christmas, I replaced some of the cosmetic parts of my X220, which is also from 2012.

It’s amazing how usable it still is, and I actively prefer the keyboard over the more modern ones.

I’ve been using this setup for over a month now, and it has been surprisingly adequate. Yes, opening Java projects in IntelliJ will make things slow, and to record my desktop with OBS and acceptable performance, I had to drop my screen resolution to 720p. I can’t expect everything to work super well on this machine, but for a computer that was released almost 10 years ago, it’s still holding up well.

I’d like to thank Intel here for making this possible. The CPU innovation stagnation between 2012-2017 has resulted in 4 cores still being an acceptable low-end CPU in early 2022. Without this, my laptop would likely be obsolete by now.

Source: Why I went back to using a ThinkPad from 2012

Somebody please tell the travel industry there's a climate emergency

Utter madness.

German giant Lufthansa said it would have to fly an additional 18,000 “unnecessary” flights through the winter to hold on to landing slots. Even if the holidays brought a big increase in passengers — marked by thousands of flight cancellations that left travelers stranded — the rest of the winter period could be slow as omicron surges worldwide.

Landing and departure slots for popular routes in the biggest airports are an extremely precious commodity in the industry, and to keep them, airlines have to guarantee a high percentage of flights. It is why loss-making flights have to be maintained to ensure companies keep their slots.

It was an accepted practice despite the pollution concerns, but the pandemic slump in flying put that in question. Normally, airlines had to use 80% of their given slots to preserve their rights, but the EU has cut that to 50% to ensure as few empty or near-empty planes crisscross the sky as possible.

Source: Near-empty flights crisscross Europe to secure landing slots | AP News

Covid immunity and medical breakthroughs

It seems like we’re learning a lot in a very short space of time about viruses and immunity. Happily, this might lead to breakthroughs in all sorts of areas.

We tend to think of immunity as something of an absolute – either we’re immune to a virus, or we’re not. But that hides a world of complications, says Danny Altmann, professor of medicine and immunology at Imperial College London. The genes that control our immunity are among the most diverse in the human body, he says, differing hugely from person to person.

[…]

But what explains this natural immunity? The most likely theory is that these people’s immune systems have already been exposed to similar viruses, years or decades earlier. Sars-Cov-2 is one of a family of seven human coronaviruses, most of which cause the common cold. All of these viruses look fairly similar. When your T-cells learn how to fight one, they get better at fighting them all, it is thought.

Another, less well-researched answer lies in our genes. Some people might simply be born with an immunity to certain viruses, scientists suspect.

[…]

If it turns out that some people are indeed naturally immune to Covid, it’s wonderful news for them. But it might also help the rest of us, speeding up development of a pan-coronavirus vaccine capable of defeating any variant. The current generation of Covid vaccines were all designed to target the spike protein, on the virus’s outer edge. But the spike protein also changes frequently, each time the virus mutates. This means vaccines are slightly less effective against each new variant.

But natural immunity appears to work differently. In the UCL trial, researchers looked carefully at the blood of those volunteers who seemed to have pre-existing immunity to the virus. Rather than targeting the spike protein, their T-cells were targeting proteins at the centre of the virus. These proteins are much less likely to change from mutation to mutation. In fact, they tend to be found in most coronaviruses, not just Sars-Cov-2. If a vaccine could be built to target these inner proteins, it might just be able to defeat all variants – as well as a range of other coronaviruses.

Source: Why some people keep getting Covid – and others never at all | The Telegraph

Jam tomorrow

The key to success traditionally has been to play the long game. If the system is rigged in your favour, that works. If it's not, then it's always "jam tomorrow".

Otegha Uwagba
Where does the idea that we can achieve, or should even be aiming for, endless productivity come from? Arguably we’ve been barrelling towards this conclusion for as long as capitalism has existed, but the technological advances of the past few decades have further eroded the barriers around work that stop it seeping into every aspect of our lives. There is a widespread cultural fetishisation of productivity, with overwork framed as a virtue by employers desperate to find ways to motivate a workforce for whom the traditional rewards – a decent salary, pension, job security – often no longer apply.

Source: This year, I stopped being productive. Why is it so hard to come to terms with that? | The Guardian

Everyone has something to teach

As someone who is apparently in a microgeneration between Generation X and Millennials, I constantly feel the tension between the “old ways” of doing things and the “throwing things against the wall to see what sticks” approach.

This article frames the issue nicely: everyone has something to teach, no matter whether you’re the person with lots of experience to share, or the person with the new approach.

Light patterns

Gaining experience takes time, effort, and often comes at the price of making painful mistakes. You don’t want to let those lessons go. You want them to mean something, to help keep you from making the same painful mistakes again. To help keep others from making the same mistakes you made. So it will always be the case that those with the most experience – and the good, smart, accurate wisdom that comes from it – will be the least willing to adapt their views as the world evolves.

Neither should be the case, because every generation cycles through the same process. Today’s older generation once understood the world better than their parents, who scoffed at them. Today’s younger generation will one day be stuck in the antiquated norms of their past, and their kids will scoff at them. I can imagine my son in 80 years screaming, “Get off my metaverse lawn!”

One takeaway from this is that no age has a monopoly on insight, and different levels of experience offer different kinds of lessons. Vishal Khandelwal recently wrote that old guys don’t understand tech, but young guys don’t understand risk. Another way to put it is: everyone has something to teach.

Source: Experts From A World That No Longer Exists · Collaborative Fund

Image: CC BY Tea, two sugars

Ignore the sociotechnics at your peril

This post is focusing on technical teams looking after software. But it can also apply to anything where systems are being developed and/or maintained.

Each set of markers we added to our system provided new context to form assumptions and frame our thinking. Everything in the visualization existed whether we were looking or not. It becomes clear when looked at this way that each of these dimensions is inextricably linked. It’s impossible to think holistically about software without thinking about the operational environment, or the users of the system, or the people involved in building and maintaining it. These things come together to create another lens through which we can view the world.

It’s important to point out that the final image here is still incomplete. We’ll never fit all of the contexts into a single model. We could keep going, adding more and more context. A fascinating one, for example, would be marking the beginning of the COVID pandemic, when a team that perhaps was colocated started working remotely, and when stress and risk of burnout increased considerably. Otherwise, we’ll eventually include the whole world, but it’s interesting to continually zoom out and see how a new lens helps frame our perspectives.

[…]

Many organizations have adopted the practice of doing “post-mortems” or “retrospectives” after incidents. Retrospectives are great! Unfortunately, I think a lot of learning is left on the table by the adoption of template-driven processes that produce shallow understandings of what transpired. I’ve spoken about how I think we can improve this. There are also experts in the field who provide training and consulting in incident analysis. There are also communities and companies dedicated to helping you improve this practice.

Source: Sociotechnical Lenses into Software Systems | Paul Osman

Wealth is a product of luck

This seems obvious to me: that luck plays a great part in success. Well, serendipity perhaps, which can always be given a helping hand by elite networks and pushy parents…

Definition of luck

The conventional answer is that we live in a meritocracy in which people are rewarded for their talent, intelligence, effort, and so on. Over time, many people think, this translates into the wealth distribution that we observe, although a healthy dose of luck can play a role.

But there is a problem with this idea: while wealth distribution follows a power law, the distribution of human skills generally follows a normal distribution that is symmetric about an average value. For example, intelligence, as measured by IQ tests, follows this pattern. Average IQ is 100, but nobody has an IQ of 1,000 or 10,000.

The same is true of effort, as measured by hours worked. Some people work more hours than average and some work less, but nobody works a billion times more hours than anybody else.

And yet when it comes to the rewards for this work, some people do have billions of times more wealth than other people. What’s more, numerous studies have shown that the wealthiest people are generally not the most talented by other measures.

What factors, then, determine how individuals become wealthy? Could it be that chance plays a bigger role than anybody expected? And how can these factors, whatever they are, be exploited to make the world a better and fairer place?

Source: If you’re so smart, why aren’t you rich? Turns out it’s just chance. | MIT Technology Review

Image: CC BY-ND fearthekumquat
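
The claim that a symmetric talent distribution can still produce a power-law wealth distribution is easy to reproduce in a toy simulation. The sketch below is purely illustrative, with made-up parameters rather than the researchers’ actual model: talent is drawn from a bounded normal distribution, while wealth compounds multiplicatively through random lucky and unlucky events.

```python
import random

def simulate_wealth(n_agents=1000, n_steps=80, seed=42):
    """Toy 'talent versus luck' model (illustrative only).

    Talent is symmetric and bounded; wealth changes multiplicatively:
    a lucky event doubles it (if talent lets the agent exploit it),
    an unlucky event halves it regardless of talent.
    """
    rng = random.Random(seed)
    # Talent ~ Normal(0.6, 0.1), clipped to [0, 1]: bounded, like IQ or hours worked.
    talent = [min(1.0, max(0.0, rng.gauss(0.6, 0.1))) for _ in range(n_agents)]
    wealth = [10.0] * n_agents  # everyone starts with identical capital

    for _ in range(n_steps):
        for i in range(n_agents):
            roll = rng.random()
            if roll < 0.03:
                # Lucky event: exploited only with probability equal to talent.
                if rng.random() < talent[i]:
                    wealth[i] *= 2
            elif roll < 0.06:
                # Unlucky event: wealth halves regardless of talent.
                wealth[i] /= 2

    return talent, wealth

talent, wealth = simulate_wealth()
print(f"talent spread: {max(talent) / min(talent):.1f}x")
print(f"wealth spread: {max(wealth) / min(wealth):.1f}x")
```

With these numbers the spread of talent stays within a small factor while the spread of wealth grows to orders of magnitude, simply because wealth multiplies and talent doesn’t.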

Unsolicited advice might not be so bad after all?

I’ve followed Tressie McMillan Cottom on Twitter ever since she did a keynote for ALT a few years ago. In this article for The New York Times, she talks about ‘advice culture’.

Cottom is wonderfully forthright in her interactions on Twitter, so I was expecting her to rail against advice culture. Instead, she talks about it as a form of small talk, and (I suppose) a form of necessary social glue.

In the social media era, advice culture feels bigger and more pervasive than ever. After all, what is social media if not the gamification of advice? Every time we post something on Facebook or Twitter or Instagram, we are implicitly asking others to make a judgment of us. And we find ourselves unable to understand why someone would post or share an experience if not to solicit our evaluation. That is what “likes” and comments and “friending” has done to our brains...

Advice culture is so pervasive that it must serve some other function, do something more than assuage insecurities or perform status. Sociologists generally agree that advice is up there with small talk for how it facilitates human connection between strangers. But I recently began thinking advice is no longer a mere subset of small talk but has become our culture’s default common language. Advice is small talk. The decline of social associations like the Rotary Club and the bowling leagues not only weakened our connections to community; it also atrophied our linguistic tool kit.

Source: Why Everyone Is Always Giving Unsolicited Advice | The New York Times

Peeking around corners with holographic cameras

It's amazing to think that 10 years ago we thought we were only a few years away from fully autonomous vehicles. Even now, we're in the early stages of actually making them safe.

Blind corners have long troubled drivers, but they might not pose such a hazard for much longer. Researchers at Northwestern University have developed a new holographic camera technology that can peer around corners by reconstructing scattered light waves, quickly enough to spot fast-moving objects like cars or pedestrians.

When light strikes an object, it scatters, and some of that finds its way to our retinas, or the sensors of a camera, allowing the object to be seen. Of course, that means we can’t see objects behind other objects, or through scattering media like fog or skin. But there might be a way to use the scattering of light off multiple objects to see around corners.

Position a mirror just right, and you can see objects around corners. Even without a mirror, that principle still holds true – it’s just that the secondary object scatters the light too much for us to reconstruct the target. But an emerging technology called non-line-of-sight (NLoS) imaging can do just that.

NLoS systems work by beaming light out, which bounces off a surface, strikes an object and bounces back to the surface, then back to a sensor. Algorithms can then create an image of the object around a corner. As you might expect however, images reconstructed in this way can often be low resolution, or take too long to process.

Source: Holographic camera reconstructs objects around corners in milliseconds | New Atlas

The impact of a plant-based diet on migraines

Aged 18, I was rejected at the last hurdle from the Royal Air Force for a scholarship which would have paid for my university tuition. The reason? I’d just started suffering from migraines.

There are lots of different types of triggers, but common to all types is stress. That’s not so good if you’re looking to be employed in Fighter Command.

In many ways, I dodged a bullet (literally and metaphorically!) by not joining the RAF, but migraines have been a constant struggle. In the past few years I’ve had a lot fewer of them, something I put down to reducing my stress levels and taking an L-Theanine supplement every day.

However, this article focuses on the benefits of a plant-based diet for migraine sufferers. I stopped eating meat in 2017 and then eliminated fish too, turning vegetarian in January of this year. It looks like that might have been a great idea not only from an animal welfare point of view, but in terms of my own welfare too!

Green leafy vegetables

Health experts are calling for more research into diet and migraines after doctors revealed a patient who had suffered severe and debilitating headaches for more than a decade completely eliminated them after adopting a plant-based diet.

He had tried prescribed medication, yoga and meditation, and cut out potential trigger foods in an effort to reduce the severity and frequency of his severe headaches – but nothing worked. The migraines made it almost impossible to perform his job, he said.

But within a month of starting a plant-based diet that included lots of dark-green leafy vegetables, his migraines disappeared. The man has not had a migraine in more than seven years, and cannot remember the last time he had a headache. The case was reported in the journal BMJ Case Reports.

Source: Man’s severe migraines ‘completely eliminated’ on plant-based diet | Nutrition | The Guardian

Information is not knowledge (and knowledge is not wisdom)

Some reflections by Nick Milton on why knowledge management within organisations is so poor. If I were him, I would have included the illustration below from gapingvoid, as I think it captures his five points rather well.

data, information, knowledge, insight, wisdom, impact

Firstly, much of the knowledge of the organisation is never codified as information.

[…]

Secondly, a common problem (a corollary of the first) is that project knowledge may never have been recorded in project documents.

[…]

Thirdly, and a corollary to the first two, the vast majority of project information is not knowledge anyway. If you are relying on project documents as a source of knowledge, you will be relying on a very diluted source - a lot of noise and not much signal.

[…]

Fourthly, if there is codified knowledge in the project documents, it tends to be scattered across many documents and many projects.

[…]

Finally, many of the knowledge problems are cultural. People are incentivised to rush on to the next job rather than to spend time reflecting on lessons, no matter how important.

Source: Why you can’t solve knowledge problems with information tools alone | Knoco Stories

Start Often Finish rArely

I love this; along with this post about the joy of watching films in black and white, it led to me starting a new art project.

(Un)familiar

SOFA is the name of a hacker/art collective, and also the name of the principle upon which the club was founded.

The point of SOFA club is to start as many things as possible as you have the ability, interest, and capacity to, with no regard or goal whatsoever for finishing those projects.

[…]

You can be finished with your project whenever you decide to be done with it. And “done” can mean anything you want it to be. Whose standards of completion or perfection are you holding yourself to anyway? Forget about those! Something is done when you say it is. When it’s no longer interesting. When you’ve gotten a sufficient amount of entertainment and experience from it. When you’ve learned enough from it. Whatever, whenever. Done is what you say it is.

Source: 🛋 SOFA

Pain, suffering, and scuba diving

In this post, Derek Sivers shares his experience of a panic attack during a scuba diving trip and being calmed down by his instructor. He then subsequently used the same technique to help someone else who wasn’t OK on a later trip.

Pain and suffering are part of the human experience. For anyone to ask “why me?” doesn’t make sense. There are those who dissimulate and those who don’t, but underneath it all there is hardship.

There is less stigma around therapy than there used to be, but I haven’t met anyone (me included!) who hasn’t transformed their life for the better after going through some form of counselling.

I learned a few lessons from this experience.

There are things in life we think won’t apply to us: Panic. Addiction. Depression.

I thought that was for other people. I thought I wasn’t that type. Why is this happening to me?

But I learned so much empathy that day. These things that only seem to happen to other people can happen to me. We’re not so different. It helps me recognize it in others, and be most helpful by remembering that feeling.

I imagine this is why people who have been through really hard times become counselors.

That day also reinforced the power of imitation. My teacher calmed me down so well that it was best to just imitate him.

Source: scuba, panic, empathy | Derek Sivers

Introspections on timewasting

What I’ve learned over the years from my own experience is that what one person calls a “waste of time” is the complete opposite for someone else. It also has a timing element to it, I’d argue.

For example, I spent an inordinate amount of time on Twitter between 2007 and 2012, but this paid off massively in terms of my career. I learned a lot and made really valuable connections in the process. My wife thought it was a waste of my time. These days, it absolutely would be.

In this post, the author reflects on why they “waste time” and looks at different reasons as well as various ways they’ve sought to combat it. As someone who’s been through therapy, I’d suggest that there’s always something below the surface that needs a skilled professional to draw out.

That’s what seems missing here.


I spend way too much time on reddit, hacker news, twitter, feedly, coinbase, robinhood, email, etc. I’d like to spend time on things that are more meaningful - chess, digital painting, side projects, reading, video games, exercising, meditating, etc. This has been a problem for years.

This post is only slightly adapted from my personal notes, so excuse the lack of structure; I didn’t want to try to twist this jumble of thoughts into a narrative.

Source: Scattered Thoughts on Why I Waste My Own Time | Just a blog

Image: CC BY mysza831

Should teenagers be using social media? We probably already know the answer

While I’m not a fan of Nicholas Carr’s approach to technology (“Is Google Making Us Stupid?”), I do have sympathy with Cal Newport’s more nuanced and considered approach.

Writing in The New Yorker, Newport considers whether we should be allowing teenagers to use social media at all. By this, he doesn’t mean the ‘social internet’, which I explore further in this post.

Our son turns 15 soon, and while we’ve grudgingly allowed him to use WhatsApp (I don’t use any Facebook/Meta products), he isn’t allowed an Instagram, Twitter, or TikTok account. Digital parenting is a thing.

I’m not sure, however, that we should be so quick to give up on interrogating the necessity of these technologies in our lives, especially when they impact the well-being of our children. In an attempt to keep this part of the conversation alive, I reached out to four academic experts—selected from both sides of the ongoing debate about the harm caused by these platforms—and asked them, with little preamble or instruction, the question missing from so much of the recent coverage of the Facebook revelations: Should teen-agers use social media? I wasn’t expecting a consensus response, but I thought it was important, at the very least, to define the boundaries of the current landscape of expert opinion on this critical issue.

[…]

For a particularly dispiriting case study of how long it sometimes takes to establish definitive causation between behaviors and negative outcomes, consider the effort involved in connecting smoking to lung cancer. The first major study showing a statistical correlation between cigarettes and cancer, authored by Herbert Lombard and Carl Doering of the Massachusetts Department of Public Health and the Harvard School of Public Health, was published in 1928. I recently came across an article in the archives of The Atlantic from 1956—nearly thirty years later—in which the author was still trying to convince skeptics who were unhappy with the types of confounding factors that are unavoidable in scientific studies. “If it has not been proved that tobacco is guilty of causing cancer of the lung,” the article pleads, “it has certainly been shown to have been on the scene of the crime.”

[...]

What is obvious, however, is that regardless of what answers we end up with, we need to keep debating these fundamental questions. As Zuckerberg emphasized in his defensive post, he wants us to concede that his products are inevitable, and that we have no choice but to move on to discussing their features and safeguards. We might think we’re really sticking it to these social-media giants when we skewer their leaders in congressional hearings, or write scathing commentary pieces about the shortcomings of their moderation policies, but, in some sense, this response provides a reprieve because it sidesteps the conversation that these companies are trying hardest to avoid: the conversation about whether, in the end, the buzzy, digital baubles they offer are really worth all the trouble they’re creating.

Source: The Question We’ve Stopped Asking About Teen-Agers and Social Media | The New Yorker

Freedom for the few vs. freedom for the many

My wife and I were talking about lockdowns yesterday given that we’re due to be travelling to the Netherlands next month and they’ve announced a partial lockdown. I can’t imagine something similar would be accepted in the UK — by which I mean it would probably be difficult to enforce.

Austria is imposing a nationwide lockdown for unvaccinated people. This sounds like a good solution for those who are vaccinated, but (a) there are non-conspiracy reasons why people aren’t vaccinated, and (b) whatever means are used to prove vaccination status will be instantly forged.

It’s a problem, for sure, how to protect the freedoms of everyone.

Austrian Chancellor Alexander Schallenberg at a coronavirus meeting (Credit: Dragan Tatic)
Austria will impose a nationwide lockdown for people who have not been vaccinated against COVID-19, becoming the first country in the world to do so, Chancellor Alexander Schallenberg announced on Friday.

[…]

“A lockdown for the unvaccinated means one cannot leave one’s home unless one is going to work, shopping (for essentials), stretching one’s legs – exactly what we all had to suffer through in 2020,” Schallenberg said earlier, according to Reuters.

The lockdown for the unvaccinated has already been formally approved in Upper Austria, where restrictions have also been announced for the entire population. This includes a legal requirement to wear an FFP2 mask in all indoor public places and a ban on events for 3 weeks.

[…]

However, questions have been raised about the feasibility of a lockdown which applies to only a part of the population. “We don’t live in a police state and we can’t and don’t want to check every street corner,” Schallenberg said.

Source: Austria to declare nationwide lockdown for unvaccinated people | BNO News

Games as a cultural, educational, and predictive force

As a gamer, I grow frustrated with people who don’t consider games to be an art form and vehicle for stories comparable with other cultural pursuits.

Take, for example, one of the biggest games of this year, the recently released Battlefield 2042. Not only is it a technical and cultural milestone, but it presents a plausible timeline for how things could go given our current trajectory: climate, migration, wars, you name it.

2037

Humanity adapts to the new normal. Revolutions in energy, desert irrigation, hydraulic levees, and sea walls save coastal cities, reclaim farmland, and rebuild supply chains. Hope of finding stability leads to some nations re-opening their borders.

However, with no way to repatriate 1.2 billion people, No-Pats become a permanent fixture in all economic, military, and social policy making. Many No-Pats are still distrustful of the governments that exiled them and refuse calls to reassimilate. No-Pat leaders emerge, inspiring a new identity unbound to former nationality, drawing a line in the sand between the Old World and The New Normal. #WeAreNoPats becomes a rallying cry.

I highly recommend watching the accompanying nine-minute short film on YouTube.

Source: The World of 2042 | Electronic Arts

Big Tech companies may change their names but they will not voluntarily change their economics

I based a good deal of Truth, Lies, and Digital Fluency, a talk I gave in NYC in December 2019, on the work of Shoshana Zuboff. Writing in The New York Times, she starts to get a bit more practical as to what we do about surveillance capitalism.

As Zuboff points out, Big Tech didn’t set out to cause the harms it has any more than fossil fuel companies set out to destroy the earth. The problem is that they are following economic incentives. They’ve found a metaphorical goldmine in hoovering up and selling personal data to advertisers.

Legislating for that core issue looks like it could be more fruitful in terms of long-term consequences. Other calls like “breaking up Big Tech” are the equivalent of rearranging the deckchairs on the Titanic.

Democratic societies riven by economic inequality, climate crisis, social exclusion, racism, public health emergency, and weakened institutions have a long climb toward healing. We can’t fix all our problems at once, but we won’t fix any of them, ever, unless we reclaim the sanctity of information integrity and trustworthy communications. The abdication of our information and communication spaces to surveillance capitalism has become the meta-crisis of every republic, because it obstructs solutions to all other crises.

[…]

We can’t rid ourselves of later-stage social harms unless we outlaw their foundational economic causes. This means we move beyond the current focus on downstream issues such as content moderation and policing illegal content. Such “remedies” only treat the symptoms without challenging the illegitimacy of the human data extraction that funds private control over society’s information spaces. Similarly, structural solutions like “breaking up” the tech giants may be valuable in some cases, but they will not affect the underlying economic operations of surveillance capitalism.

Instead, discussions about regulating big tech should focus on the bedrock of surveillance economics: the secret extraction of human data from realms of life once called “private.” Remedies that focus on regulating extraction are content neutral. They do not threaten freedom of expression. Instead, they liberate social discourse and information flows from the “artificial selection” of profit-maximizing commercial operations that favor information corruption over integrity. They restore the sanctity of social communications and individual expression.

No secret extraction means no illegitimate concentrations of knowledge about people. No concentrations of knowledge means no targeting algorithms. No targeting means that corporations can no longer control and curate information flows and social speech or shape human behavior to favor their interests. Regulating extraction would eliminate the surveillance dividend and with it the financial incentives for surveillance.

Source: You Are the Object of Facebook’s Secret Extraction Operation | The New York Times

Momentum over details

I subscribed to Laura Olin’s newsletter recently, and the first issue I received mentioned how the late David Graeber “liked to… write propped up in the bathtub or lying on the floor; that way, it didn’t feel like work”.

She also links to another newsletter issue by Kate McKean, which I quote below, about just getting on with things, keeping the momentum going, and not getting stuck with details. It reminds me of my collaborations this week (in a good way!)

Angular momentum

Which leads me to my newest revelation: just write it down now! You can fix it later! Truly and honestly, you can fix it later. Do you know how many TKs are in my book right now? (TK is copyediting shorthand for I’ll fill it in later) Instead of trying to figure out what day of the week it is in my book, it literally says “TK weekday.” I was in the middle of a scene when I realized one of the characters in it couldn’t physically be there at that time, so I deleted his name and wrote “TK some other guy” and kept going! I liked the scene! I’ll figure out that TK later! Or I will cut it! The words exist now, which means they can be edited, refined, deleted, kept. You’re going to edit your book five thousand times; you won’t notice one more. And for those of you who worry that’s going to add time to an already long and arduous process, well, I say that if it gets you to The End faster, even if you have to go back to all those TK’s, you’re that much better off.

Source: How to Like What You Write | Kate McKean

Image: CC BY-NC-ND Alan Bloom

Aimless wandering in search of the unknown catalyst

It might not be too much of a stretch to describe Edward Snowden as a hero of mine. I’m not sure what he’s still doing in Russia, but the moral conviction it took to do what he did is staggering.

He writes in exile through a newsletter which is well worth subscribing to. In his most recent missive, he talks about lacking what he calls “origination energy”. On a much smaller and more insignificant level, I lack this too — especially at this time of year.

So as the young people say, I feel seen.

Edward Snowden poster

For a long time now, I’ve wanted to write to you, but found myself unable. Not from illness—although that came and went—but because I refuse to put something in your inbox that I feel isn’t worth your time.

The endless stream of events that the world provides to remark upon has the tendency to take on an almost physical weight, and robs me of what I can only describe as origination energy: the creative spark that empowers us not simply to do something, but to do something new. Without it, even the best of what I can produce feels derivative and workmanlike—good enough for government, perhaps, but not good enough for you.

I suspect you may know a similar struggle—you can tell me how you fight it below, if you like—but my only means for overcoming it is an aimless wandering in search of the unknown catalyst that might help me to refill my emptied well. Where once I might have had a good chance of walking away inspired by the empathy I felt while watching a sad, sad film, achieving such inspiration feels harder now, somehow. I have to search farther, and wander longer, across centuries of painting and music until at last, when passing by a dumpster, yesterday’s internet comment might suddenly pop into my head and blossom there, as if a poem. The thing—the artifact itself—doesn’t matter, so much as what it does for me—it enlivens me.

This, to me, is art.

Source: Cultural Revolutions | Edward Snowden

Image CC BY-NC-ND: Antonio Marín Segovia

Surveillance vs working openly

Austin Kleon is famous for his book Show Your Work, something that our co-op references from time to time, as it backs up our belief in working openly.

However, as Kleon points out in this post, it doesn’t mean you need to livestream your creative process! For me, this is another example of the tension between being able to be a privacy advocate at the same time as a believer in sharing your work freely and openly.

It’s bad enough trying to create something when nobody’s watching — the worst trolls are the ones that live in your head!

The danger of sharing online is this ambient buildup of a feeling of being surveilled.

The feeling of being watched, or about to be watched.

You have to disconnect from that long enough to connect with yourself and what you’re working on.

Source: You can’t create under surveillance | Austin Kleon

A Timeline of Earth's Average Temperature

I can’t argue with every anti-vaxxer and climate denialist, but I can debate a few on my timeline. And this xkcd is another useful thing to point towards when they talk about how climate change is nothing new…

Source: xkcd: Earth Temperature Timeline

Platform power and infrastructure

John Naughton notes that we need to describe a fourth kind of power, alongside compelling us to do something, stopping us from doing something, and shaping the way we think.

Image by fabio on Unsplash

The great unsolved problem of our time is how to deal with – and where necessary curb – the unaccountable power of these giants. The first step on that road is to reach a collective understanding of what kind of power they actually wield. And for that we need a taxonomy. In an earlier era, the political theorist Steven Lukes proposed one. There were, he said, three kinds of power: the ability to compel people to do what they don’t want to do, the ability to stop them doing something they want to do and the ability to shape the way they think. This last one was useful in addressing the power of influential media owners (Rupert Murdoch, for example) in the old media ecosystem. But although it still applies in some ways to social media, it’s less useful for the networked ecosystem we now inhabit; we need another category.

“Platform power” is one possibility. The tech giants all possess it to a greater or lesser degree. In Apple’s case, for example, it owns and controls two important platforms – ie, software systems on which other agents can build businesses: they are the operating systems on which its devices run and its app store, which decides what apps are allowed on Apple devices. Google owns several platforms – a search engine and its associated advertising marketplace, the Android mobile operating system, YouTube and Google cloud services. Facebook (whose holding company is now rebranded as Meta) also owns several – Facebook, Instagram and WhatsApp; Twitter owns, er, Twitter; Amazon owns its marketplace and cloud services; and Microsoft owns the Windows/Office platform and a fast-growing cloud service, Azure.

Source: How can we tame the tech giants now that they control society’s infrastructure? | The Guardian

Proving endemic racism and sexism in the world of football

Anyone who follows football will perhaps be disappointed yet unsurprised that racism and sexism continue to be part of the beautiful game.

This study is clever in the way that it shows that those watching football matches use coded language and are biased against women. Hopefully, it will help all of us figure out better ways forward.

(I actually really enjoy watching women’s football with my family!)

The resulting paper, “Pace and Power: Removing unconscious bias from soccer broadcasts,” caused a stir when they presented it at last month’s New England Symposium on Statistics in Sports. Of the 47 sports fans who watched a two-minute clip of the World Cup TV broadcast, 70 percent said that Senegal, whose players were all Black, was “more athletic or quick.” But of 58 others who saw an animation of the same two minutes without knowing which teams they were watching, 62 percent picked Poland, whose players were all white, as the more athletic side. The physical advantages that supposedly defined the African team’s style of play disappeared as soon as their skin color did.

[…]

The athleticism flip-flop offers a new kind of evidence of a prejudice that affects how Black players of every nationality are perceived. For decades, researchers have documented media stereotypes of African players as “‘powerful,’ ‘big-thighed,’ ‘lithe of body,’ ‘big,’ ‘explosive,’ and like ‘lightning,’ attributes that were to be contrasted with ‘the know-how that England possess.’” As Belgian forward Romelu Lukaku, who is Black, told The New York Times, “It is never about my skill when I am compared to other strikers.” Now, for the first time, researchers have a way to isolate how race influences direct perceptions of the game.

Interestingly, they also looked at gender as well as race:

The study also examined attitudes toward gender by showing viewers a pair of two-minute clips, one from the American top-flight National Women’s Soccer League and another from League Two, the English men’s fourth tier. Even though the NWSL draws more fans to games, its average player earns about a quarter as much as the average player in League Two. Gregory and Pleuler were curious whether this “clear gender pay gap” could be explained by a difference in the quality of the soccer shown on TV, as some have argued.

People who watched the broadcasts said that the men’s game was “higher quality” by a 57 percent to 43 percent margin. Those who saw the renders with genderless stick figures preferred the women’s match, 59 percent to 41 percent. The results weren’t statistically significant across a small sample of 105 mostly male respondents, but Pleuler believes the line of research is promising. “I think these results are suggestive that your average soccer fan can’t tell the difference between something that does have a large investment level and the women’s game, which does not,” he said.

Source: Soccer Looks Different When You Can’t See Who’s Playing | FiveThirtyEight

Just Don't Do It

This isn’t an easy article to cite, mainly because I want to quote both it and some commentary by Andrew Curry. The original article is paywalled, so I’m going to rely on Curry’s quotations.

I’m particularly interested in this because I’m one of the oldest Millennials (I was born nine days before the end of 1980). There’s something about my generation whereby we’re just not going to take that Boomer shit any more.

It turns out the latest moral panic about work — at least, about our contemporary idea of work — is being fuelled by ‘The Great Resignation’ in the US, which I wrote about here recently. (‘The four Rs of post-pandemic America.’)

One of the elements of this is that it is Millennials who are disproportionately more likely to quit. One might say, ‘what are these young people thinking of?’, were it not for the fact that the oldest Millennials are 41 this year; half a lifetime in, in other words.

The writer Erin Lowry, who has written multiple books on Millennials, and is a Millennial herself, is having none of it. In a (partly gated) short column in Bloomberg, she suggests instead that the game’s up for the version of work that has been normalised in the last two decades.

Curry quotes Lowry as saying:

After 18 months of pandemic uncertainty altering how we work, it makes sense we’d return to the questions of why we work, and how our jobs affect our quality of life. Is there perhaps another way to earn an income that better aligns with our overall goals? Couldn’t we create a future of no longer using a career as the primary or sole basis of our identity and self-satisfaction? Shouldn’t this be a moment to consider how to work to live instead of live to work?

[…]

We can theorize that this burnout comes from the increasingly blurred boundaries between being on and off the clock. From being conditioned to believe that appearing “always available” is the hallmark of a promotable employee. From jobs that once required a high school diploma suddenly demanding a bachelor’s degree, forcing young people to get mired in never-before-seen levels of student loan debt.

Source: Work | Ancestors | Just Two Things

Carbon emissions per km

Now that I'm not flying any more, I need to figure out ways to get to places where I'd usually travel by plane.

For example, I'm travelling to the Netherlands next month. Fortunately, instead of having to go down from the north of England, through London, and then across to Paris and then Amsterdam, I can take the ferry.

But what about the carbon emissions of ferries? Thankfully, for foot passengers they are, on average, even smaller than those of coaches.

Emissions from different modes of transport

Train virtually always comes out better than plane, often by a lot. A journey from London to Madrid would emit 43kg (95lb) of CO2 per passenger by train, but 118kg by plane (or 265kg if the non-CO2 emissions are included), according to EcoPassenger.

[...]

The [Department for Business, Energy and Industrial Strategy] has also put a figure on ferry transport - 18g of CO2 per passenger kilometre for a foot passenger, which is less than a coach, or 128g for a driver and car, which is more like a long-haul flight.
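Those per-passenger-kilometre figures make rough comparisons easy to run yourself. Here’s a minimal sketch using the two ferry factors quoted above; note that the ~500 km crossing distance is my own illustrative assumption, not a figure from the article:

```python
# Per-passenger emission factors (grams of CO2 per km) as quoted
# in the article; treat them as illustrative averages, not gospel.
FACTORS_G_PER_KM = {
    "ferry (foot passenger)": 18,
    "ferry (driver and car)": 128,
}

def journey_emissions_kg(mode: str, distance_km: float) -> float:
    """Back-of-the-envelope CO2 for one passenger, in kilograms."""
    return FACTORS_G_PER_KM[mode] * distance_km / 1000

# A roughly 500 km crossing (assumed distance) on foot:
print(round(journey_emissions_kg("ferry (foot passenger)", 500), 1))  # → 9.0
```

Even with generous rounding, the foot-passenger ferry option comes in at single-digit kilograms, an order of magnitude below the short-haul flight figures quoted above.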

Source: Climate change: Should you fly, drive or take the train?

Whitelabelling Stadia tech

I recognise that this is rather niche, but as a fan of Google Stadia, this is great to see. For some reason, there’s a lot of people who seem to want cloud gaming… not to work?

The thing is that it’s here, working, and pretty awesome. Yes, there’s rough edges at times, but the technology and culture around it is a lot less mature than the more traditional model.

“This is being powered by the Stadia technology,” an AT&T representative tells The Verge. “For this demo AT&T created a front end experience to enable gamers to play Batman Arkham Knight directly from their own website and the game is playable on virtually any computer or laptop.” Although AT&T calls it a “demo,” there’s no mention of length, as shown in a short walkthrough of the experience.

AT&T also adds that you can stream Arkham Knight at up to 1080p and 60fps, which is the same performance you’ll get if you use Stadia for free. Paid Stadia Pro subscribers have the ability to stream up to 4K at 60fps, for which AT&T doesn’t offer an option. On the Arkham Knight page, AT&T notes that the game will be “available for a limited time.”

We also asked AT&T whether the company would be working on similar Stadia-powered games in the future or if it planned on establishing a game streaming service for customers. AT&T wasn’t able to share any additional details, but its dip into Stadia technology may open the door for other companies to follow suit.

Source: AT&T is white-labeling Google Stadia to give you free Batman game streaming | The Verge

Build your 'castle' on land you own and control

This post is ostensibly about marketing a game studio, but it has wider lessons for all kinds of creators. Long story short? Don’t get seduced by ‘exposure’ but instead spend your time directing people towards places that you own and control.

The metaphors and graphics used are lovely, so be sure to click through and read it in its entirety!

Your game studio is basically your land. You are the king. You can do whatever you want on this plot of land and kick out who you want, charge what you want. Set the rules.

Goal here: You want to grow from this little tiny hamlet to a giant castle. You also want a bunch of people in your kingdom living there (aka playing your games), and paying you taxes (buying your games) and telling you how brilliant of a leader you are (fan mail, fan art) and enjoying the company of your kingdom’s fellow citizens (community engagement).

[…]

It is hard to make people leave a social media site. But you need to work hard at it.

With every single person who enters your castle in a foreign land, tell them “welcome, yes my castle is nice here, but did you know I do better stuff over there in that Kingdom across the sea?”

Always be working to get people over to your land.

Source: Don’t build your castle in other people’s kingdoms | How To Market A Game

Retro football gaming FTW

The amount of nostalgia that this article gave me was incredible. I have spent more time with the Championship Manager series of games (now Football Manager) than any other game. Even FIFA.

Perhaps because it was a formative period in my life, Championship Manager 93 and Championship Manager Italia were my favourites. I can remember playing one of them on the bus to a football tournament in Blackpool on my Dad’s laptop (with a monochrome screen!)

Theoretically, you can play these games via archive.org. But the links weren’t working for me when I tried them…

Championship Manager screenshot

And then there was Championship Manager. The king of them all. It’s hard to explain why. It took 10 minutes to get through the classified results each week. There were about three tactics you could adopt. There was no training. But somehow it just worked. That old man on the cover pretending to be a manager pointing at you in his sheepskin coat.

[…]

I recently downloaded Championship Manager 93 on my laptop. It runs slightly more quickly on a Mac. I took Cambridge United to the top of the Premier League. A front line of Darrell Powell, Dominic Iorfa and Nick Barmby were unstoppable.

Source: No passing, no training, seven discs: the joys of 90s football gaming | The Guardian

How to communicate remotely

I’ve worked from home for almost a decade now and still find posts like this incredibly instructive. Not only does Olivier Lacan go through gear, but also how to set it up.

In addition, there’s a few useful tips in here about remote etiquette and when to jump on a call instead of continuing a back-and-forth via text.

By definition being remote means not being there. But feeling present goes a long way. A simple look can trigger a strong reaction and a sense of shared understanding. A slight change in intonation can convey doubt or excitement better than a paragraph. Cameras can’t magically make your expressions visible when light isn’t bouncing off your face. Backlighting or contre-jour for example is a very common mistake that I see very smart people make over and over again, even during important video calls featuring very important people you’d assume would have staff to assist them.

[…]

The one-stop-shop doesn’t exist quite yet, but I can tell you from experience that you can already communicate remotely with higher fidelity than the majority of office workers through the world did even before the pandemic. While your three-dimensional presence will never be replaceable, it’s possible for two-way communication to have an unprecedented amount of subtlety.

[…]

It’s the responsibility of employers to deploy the kind of budgets already allocated toward in-office communication to remote work equipment. It’s also the role of folks like me (and you) to help educate IT departments and business leaders on hardware solutions that already exist today.

It has become quite absurd to argue that remoteness has to mean becoming a less visible and valued contributor to your organization. I hope this post can help you convince anyone who might still believe that communicating remotely still has to be a pain.

Source: High Fidelity Remote Communication | Olivier Lacan

Exploration pays long-term dividends for your career

This article focuses on the work of Dashun Wang, an economist at Northwestern University, who has looked at ‘hot streaks’ in the careers of regular people.

It comes down, apparently, to exploring areas and then exploiting them. I would translate that into British English as “pissing around with stuff that looks interesting until you find a use for it”.

The conventional wisdom is that hot streaks happen in our middle age. One famous analysis of scientists and inventors found that their ability to produce Nobel Prize–winning insights and landmark technological contributions peaks between the ages of 35 and 40. Another analysis of “age-genius curves” for jazz musicians found that musical productivity rises steadily until about the age of 40 and then declines sharply.

Wang’s analysis—which used a broader measure of productivity for a much larger group of people—didn’t find anything special about the productivity of middle-aged people. Instead, hot streaks were equally likely to happen among young, mid-career, and late-career artists and scientists. Other theories fell flat too. Maybe, he thought, getting hot is a numbers game, and hot streaks happen when you produce the most work. Or maybe extremely successful work periods are all about focusing on one specific type of art or scientific discipline—as the 10,000-hours-of-practice rule popularized in Malcolm Gladwell’s book Outliers suggests. Or maybe hot streaks are more about who else we’re working with, and we’re most successful when we cozy up to superstars in our domain. But no explanation fit the data set.

Until this year. This summer, Wang and his co-authors published their first grand theory of the origin of hot streaks. It’s a complicated idea that comes down to three words: Explore, then exploit.

Source: Hot Streaks in Your Career Don’t Happen by Accident | The Atlantic

Is this a Signal backdoor?

Maybe this is nothing. Maybe it’s something. But when an Open Source messaging app claims to need to make part of it closed source, maybe there’s something going on?

There are plenty of Open Source solutions for email and commenting systems, so Free and Open Source Software (FLOSS) enthusiasts are entirely justified in wondering: is this a government backdoor?

We build Signal in the open, with publicly available source code for our applications and servers. To keep Signal a free global communication service without spam, we must depart from our totally-open posture and develop one piece of the server in private: a system for detecting and disrupting spam campaigns. Unlike encryption protocols, which are designed to be provably secure even if everyone knows how they work, spam detection is an ongoing chore for which there is no concrete resolution and for which transparency is a major disadvantage. If we put this code on the Internet alongside everything else, spammers would just read it and adjust their tactics to gain an advantage in the cat-and-mouse game of keeping spam off the network. The Signal protocols, cryptography, and source code are peer reviewed, shared for independent inspection, and provably private by design. We are bound by these security guarantees, so that your conversations and contacts remain as private and protected as ever, even if we keep spam-fighting tools out of sight.
Source: Improving first impressions on Signal | Signal blog

Taking the long view on weekly working hours

I find comparative analysis of working patterns absolutely fascinating. What counts as work? What does it mean to be productive? What is the context around work?

While I can’t remember where I came across it, this analysis takes an eight-century long view on working hours. It turns out that these days most of us work more than medieval peasants did…

Peasants relaxing in a field / working

One of capitalism's most durable myths is that it has reduced human toil. This myth is typically defended by a comparison of the modern forty-hour week with its seventy- or eighty-hour counterpart in the nineteenth century. The implicit – but rarely articulated – assumption is that the eighty-hour standard has prevailed for centuries. The comparison conjures up the dreary life of medieval peasants, toiling steadily from dawn to dusk. We are asked to imagine the journeyman artisan in a cold, damp garret, rising even before the sun, laboring by candlelight late into the night.

[…]

The contrast between capitalist and precapitalist work patterns is most striking in respect to the working year. The medieval calendar was filled with holidays. Official – that is, church – holidays included not only long “vacations” at Christmas, Easter, and midsummer but also numerous saints' and rest days. These were spent both in sober churchgoing and in feasting, drinking and merrymaking. In addition to official celebrations, there were often weeks' worth of ales – to mark important life events (bride ales or wake ales) as well as less momentous occasions (scot ale, lamb ale, and hock ale). All told, holiday leisure time in medieval England took up probably about one-third of the year. And the English were apparently working harder than their neighbors. The ancien régime in France is reported to have guaranteed fifty-two Sundays, ninety rest days, and thirty-eight holidays. In Spain, travelers noted that holidays totaled five months per year.

Source: Preindustrial workers worked fewer hours than today’s

Climate optimism

COP26 has started, and it’s easy to be cynical and defeatist about the whole thing. But this article in The Guardian offers some glimmers of hope, somewhat in the vein of the excellent Future Crunch newsletter.

Wind turbine

The real fuel for the green transition is a combination of those most valuable and intangible of commodities: political will and skill. The supply is being increased by demands for action from youth strikers to chief executives, and must be used to face down powerful vested interests, such as the fossil fuel, aviation and cattle industries. The race for a sustainable, low-carbon future is on, and the upcoming Cop26 climate talks in Glasgow will show how much faster we need to go.
Source: Reasons to be hopeful: the climate solutions available now | Climate crisis | The Guardian

Middle class pursuit of pain through endurance sports is a thing

Oh this is fascinating. Get to your forties and everyone seems to be interested in marathons, triathlons, and putting on lycra to go and cycle somewhere.

This article explains that this is a function not only of access to the required time and money, but is a deep-seated need for those who are doing well out of the capitalist system.

Participating in endurance sports requires two main things: lots of time and money. Time because training, traveling, racing, recovery, and the inevitable hours one spends tinkering with gear accumulate—training just one hour per day, for example, adds up to more than two full weeks over the course of a year. And money because, well, our sports are not cheap: According to the New York Times, the total cost of running a marathon—arguably the least gear-intensive and costly of all endurance sports—can easily be north of $1,600.

[…]

There are a handful of obvious reasons the vast majority of endurance athletes are employed, educated, and financially secure. As stated, the ability to train and compete demands that one has time, money, access to facilities, and a safe space to practice, says William Bridel, a professor at the University of Calgary who studies the sociocultural aspects of sport. “The cost of equipment, race entry fees, and travel to events works to exclude lower socioeconomic status individuals,” he says, adding that those in a higher socioeconomic bracket tend to have nine-to-five jobs that provide some freedom to, for example, train before or after work or even at lunch. “Almost all of the non-elite Ironman athletes who I’ve interviewed for my research had what would be considered white-collar jobs and commented on the flexibility this provided,” says Bridel.

[…]

Even so, there are myriad ways for relatively comfortable middle-to-upper-class individuals to spend their time and money. What is it about the voluntary suffering of endurance sports that attracts them?

This is a question sociologists are just beginning to unpack. One hypothesis is that endurance sports offer something that most modern-day knowledge economy jobs do not: the chance to pursue a clear and measurable goal with a direct line back to the work they have put in. In his book Shop Class as Soulcraft: An Inquiry into the Value of Work, philosopher Matthew Crawford writes that “despite the proliferation of contrived metrics,” most knowledge economy jobs suffer from “a lack of objective standards.”

[…]

Another reason white-collar workers are flocking to endurance sports has to do with the sheer physicality involved. For a study published in the Journal of Consumer Research this past February, a group of international researchers set out to understand why people with desk jobs are attracted to grueling athletic events. They interviewed 26 Tough Mudder participants and read online forums dedicated to obstacle course racing. What emerged was a resounding theme: the pursuit of pain.

“By flooding the consciousness with gnawing unpleasantness, pain provides a temporary relief from the burdens of self-awareness,” write the researchers. “When leaving marks and wounds, pain helps consumers create the story of a fulfilled life. In a context of decreased physicality, [obstacle course races] play a major role in selling pain to the saturated selves of knowledge workers, who use pain as a way to simultaneously escape reflexivity and craft their life narrative.” The pursuit of pain has become so common among well-to-do endurance athletes that scientific articles have been written about what researchers are calling “white-collar rhabdomyolysis,” referring to a condition in which extreme exercise causes kidney damage.

Source: Why Do Rich People Love Endurance Sports? - Outside Online

Why large tree-planting initiatives often fail

‘Carbon offsetting’ is just a way of the western middle classes assuaging their climate guilt. We can do better by thinking holistically.

In one recent study in the journal Nature, for example, researchers examined long-term restoration efforts in northern India, a country that has invested huge amounts of money into planting over the last 50 years. The authors found “no evidence” that planting offered substantial climate benefits or supported the livelihoods of local communities.

The study is among the most comprehensive analyses of restoration projects to date, but it’s just one example in a litany of failed campaigns that call into question the value of big tree-planting initiatives. Often, the allure of bold targets obscures the challenges involved in seeing them through, and the underlying forces that destroy ecosystems in the first place.

Instead of focusing on planting huge numbers of trees, experts told Vox, we should focus on growing trees for the long haul, protecting and restoring ecosystems beyond just forests, and empowering the local communities that are best positioned to care for them.

Source: Climate change: How to plant trillions of trees without hurting people and the planet | Vox

Securing your digital life

Usually, guides to securing your digital life are very introductory and basic. This one from Ars Technica, however, is a bit more advanced. I particularly appreciate the advice to use authenticator apps for 2FA.

Remember, if it’s inconvenient for you it’s probably orders of magnitude more inconvenient for would-be attackers. To get into one of my cryptocurrency accounts, for example, I’ve set it so I need a password and three other forms of authentication.

Overkill? Probably. But it dramatically reduces the likelihood that someone else will make off with my meme stocks…
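For anyone curious why app-based 2FA is harder to intercept than SMS: authenticator apps implement TOTP (RFC 6238), where the code is derived entirely on-device from a shared secret and the clock, so there’s nothing in transit to hijack. A minimal sketch (the Base32 secret below is the standard RFC test value, purely for illustration):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, at=None):
    """RFC 6238 TOTP: HMAC-SHA1 over a counter derived from the clock."""
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int((time.time() if at is None else at) // interval)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t = 59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # → 287082
```

Because the server and your phone each compute the code independently, cloning your SIM gains an attacker nothing — they’d need the secret itself.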

Security measures vary. I discovered after my Twitter experience that setting up 2FA wasn’t enough to protect my account—there’s another setting called “password protection” that prevents password change requests without authentication through email. Sending a request to reset my password and change the email account associated with it disabled my 2FA and reset the password. Fortunately, the account was frozen after multiple reset requests, and the attacker couldn’t gain control.

This is an example of a situation where “normal” risk mitigation measures don’t stack up. In this case, I was targeted because I had a verified account. You don’t necessarily have to be a celebrity to be targeted by an attacker (I certainly don’t think of myself as one)—you just need to have some information leaked that makes you a tempting target.

For example, earlier I mentioned that 2FA based on text messages is easier to bypass than app-based 2FA. One targeted scam we see frequently in the security world is SIM cloning—where an attacker convinces a mobile provider to send a new SIM card for an existing phone number and uses the new SIM to hijack the number. If you’re using SMS-based 2FA, a quick clone of your mobile number means that an attacker now receives all your two-factor codes.

Additionally, weaknesses in the way SMS messages are routed have been used in the past to send them to places they shouldn’t go. Until earlier this year, some services could hijack text messages, and all that was required was the destination phone number and $16. And there are still flaws in Signaling System 7 (SS7), a key telephone network protocol, that can result in text message rerouting if abused.

Source: Securing your digital life, part two: The bigger picture—and special circumstances | Ars Technica

The permanent mask

I’m sharing this mainly for the blackout poetry, but I also appreciate the quotation from Nabokov that Austin Kleon shares in this post.

As I explained in my checking out of therapy post, you can “paint yourself into a rather unhelpful corner by being the person everyone else expects you to be”. Taking off that mask can be liberating.

I don’t think that an artist should bother about his audience. His best audience is the person he sees in his shaving mirror every morning. I think that the audience an artist imagines, when he imagines that kind of a thing, is a room filled with people wearing his own mask.
Source: Inside the mask | Austin Kleon

Why go back to normal when you weren't enjoying it in the first place?

Shop shutters painted with sun mural

Writing in Men's Health, and sadly not available anywhere I can link to, Will Self reflects on what we've collectively learned during the pandemic.

In it, he uses a quotation from Nietzsche I can't seem to find elsewhere, "There are better things to be than the merely productive man". I definitely feel this.

[T]he mood-music in recent months from government and media has all been about getting back to normal. So-called freedom. Trouble is... people from all walks of life and communities [have] expressed a reluctance to resume the lifestyle they were enjoying before March of last year. Quite possibly this is because they weren't really enjoying that much in the first place — and it's this that's been exposed by the pandemic and its associated measures.

The difficulty, I think, is that lots of people (me included at times) had pre-pandemic lives that they would probably rate a 6/10. Not terrible enough for the situation by itself to be a stimulus for change. But now, after a break, the thought of returning to how things were sounds... unappetising.

We all know the unpleasant spinning-in-the-hamster-wheel sensation that comes when we're working all hours with the sole objective of not having to work all hours — it traps us in a moment that's defined entirely by stress-repeating-anxiety, a feeling that mutates all too easily into full-blown depression. And we're no longer the sort of dualists who believe that psychological problems have no bodily correlate — on the contrary, we all understand that working too hard while feeling that work to be valueless can take us all the way from indigestion to an infarct.

I've burned out a couple of times in my life, which is why these days I feel privileged to be able to work 25-hour weeks by choice. There's more to life than looking (and feeling!) "successful".

It's funny, I have more agency and autonomy than most people I know, yet I increasingly resent the fact that this is dependent upon some of the very technologies I've come to realise are so problematic for society.

[I]t might be nice in the way of 18 months of being told what to do, to feel one was telling one's self what to do. One way of conceptualising the renunciation necessary to cope with the transition from a lifestyle where everything can be bought to one in which both security and satisfaction depend on more abstract processes, is to critique not just the unhealthy economy but the pathological dependency on technology that is its sequel.

Ultimately, I think Will Self does a good job of walking a tightrope in this article in not explicitly mentioning politics. The financial crash, followed by austerity, Brexit, and now the pandemic, have combined to hollow out the country in which I live.

The metaphor of a pause button has been overused during the pandemic. That's for a reason: most of us have had an opportunity, some for the first time in their lives, to stop and think what we're doing — individually and collectively.

What comes next is going to be interesting.


Not a sponsored mention by any means, but just a heads-up that I read this article thanks to my wife's Readly subscription. It's a similar monthly price to Netflix, but for all-you-can-read magazines and newspapers!

Brand-safe influencers and the blurring of reality

Earlier this week, in a soon-to-be-released episode of the Tao of WAO podcast, we were talking about the benefits and pitfalls of NGOs like Greenpeace partnering with influencers. The upside? Engaging with communities that would otherwise be hard to reach. The downside? Influencers can be unpredictable.

It’s somewhat inevitable, therefore, that “brand-safe” fictional influencers would emerge. As detailed in this article, not only are teams of writers creating metaverses in which several characters exist, but they’re using machine learning to allow fans/followers to “interact”.

The boundary between the real and fictional is only going to get more blurred.

FourFront is part of a larger wave of tech startups devoted to, as aspiring Zuckerbergs like to say, building the metaverse, which can loosely be defined as “the internet” but is more specifically the interconnected, augmented reality virtual space that real people share. It’s an undoubtedly intriguing concept for people with a stake in the future of technology and entertainment, which is to say, the entirety of culture. It’s also a bit of an ethical minefield: Isn’t the internet already full of enough real-seeming content that is a) not real and b) ultimately an effort to make money? Are the characters exploiting the sympathies of well-meaning or media illiterate audiences? Maybe!

On the other hand, there’s something sort of darkly refreshing about an influencer “openly” being created by a room of professional writers whose job is to create the most likable and interesting social media users possible. Influencers already have to walk the delicate line between aspirational and inauthentic, to attract new followers without alienating existing fans, to use their voice for change while remaining “brand-safe.” The job has always been a performance; it’s just that now that performance can be convincingly replicated by a team of writers and a willing actor.

Source: What’s the deal with fictional influencers? | Vox

Psychological hibernation

I can’t really remember what life was like before having children. Becoming a parent changes you in ways you can’t describe to non-parents.

Similarly, if we tried to go back in time and explain how the pandemic has changed us, how we’re more susceptible to burnout, less up for meeting with other people, it would be almost impossible to do.

One term that might be useful, however, is ‘psychological hibernation’ — as this article explains.

Was it always like this? Can anyone actually remember what it was like before? For some reason, coming up with an answer to that question is like recalling a boring dream: the more you attempt to remember the details of life before Covid, the quicker it fades, as if it never happened at all.

In 2018, a group of psychologists in the Antarctic published a report that may help us understand our current collective exhaustion. The researchers found that the emotional capacity of people who had relocated to the end of the world had been significantly reduced in the time they had been there; participants living in the Antarctic reported feeling duller than usual and less lively. They called this condition “psychological hibernation”. And it’s something many of us will be able to relate to now.

“One of the things that we noticed throughout the pandemic is that people started to enter this phase of psychological hibernation,” said Emma Kavanagh, a psychologist specialising in how people deal with the aftermath of disasters. “Where there’s not many sounds or people or different experiences, it doesn’t require the brain to work at quite the same level. So what you find is that people felt emotionally like everything had just been dialled back. It looks a lot like burnout, symptom wise.” Kavanagh continued: “I think that happened to us all in lockdown, and we are now struggling to adapt to higher levels of stimulus.”

Source: The great Covid social burnout: why are we so exhausted? | New Statesman

Twitter acknowledges right-wing bias in its algorithmic feed

I mentioned on Twitter last week that I've noticed I keep getting recommended stories about Nigel Farage and from right-wing outlets like The Telegraph.

Lo and behold, Twitter has published findings from its own investigation, which found that its algorithms actively promote right-wing accounts and news sources. Now I hope it does something about it.

Twitter logo

What did we find?

— Tweets about political content from elected officials, regardless of party or whether the party is in power, do see algorithmic amplification when compared to political content on the reverse chronological timeline.

— Group effects did not translate to individual effects. In other words, since party affiliation or ideology is not a factor our systems consider when recommending content, two individuals in the same political party would not necessarily see the same amplification.

— In six out of seven countries — all but Germany — Tweets posted by accounts from the political right receive more algorithmic amplification than the political left when studied as a group.

— Right-leaning news outlets, as defined by the independent organizations listed above, see greater algorithmic amplification on Twitter compared to left-leaning news outlets. However, as highlighted in the paper, these third-party ratings make their own, independent classifications and as such the results of analysis may vary depending on which source is used.

Source: Examining algorithmic amplification of political content on Twitter | Twitter blog

Otters vs. Possums

It’s an odd metaphor, but the behaviours it describes in internet communities are definitely something I’ve witnessed in 25 years of being online.

(This post is from 2017 but popped up on Hacker News recently.)

Otters

There’s a pattern that inevitably emerges, something like this:
  1. Community forms based off of a common interest, personality, value set, etc. We’ll describe “people who strongly share the interest/personality/value” as Possums: people who like a specific culture. These people have nothing against anybody, they just only feel a strong sense of community from really particular sorts of people, and tend to actively seek out and form niche or cultivated communities. To them, “friendly and welcoming” community is insufficient to give them a sense of belonging, so they have to actively work to create it. Possums tend to (but not always) be the originators of communities.

  2. This community becomes successful and fun

  3. Community starts attracting Otters: People who like most cultures. They can find a way to get along with anybody, they don’t have specific standards, they are widely tolerant. They’re mostly ok with whatever sort of community comes their way, as long as it’s friendly and welcoming. These Otters see the Possum community and happily enter, delighted to find all these fine lovely folk and their interesting subculture. (e.g., in a Christian chatroom, otters would be atheists who want to discuss religion; in a rationality chatroom, it would be members who don’t practice rationality but like talking with rationalists)

  4. Community grows to have more and more Otters, as they invite their friends. Communities tend to acquire Otters faster than Possums, because the selectivity of Possums means that only a few of them will gravitate towards the culture, while nearly any Otter will like it. Gradually the community grows diluted until some Otters start entering who don’t share the Possum goals even a little bit – or even start inviting Possum friends with rival goals. (e.g., members who actively dislike rationality practices in the rationality server).

  5. Possums realize the community culture is not what it used to be and not what they wanted, so they try to moderate. The mods might just kick and ban those farthest from community culture, but more frequently they’ll try to dampen the blow and subsequent outrage by using a constitution, laws, and removal process, usually involving voting and way too much discussion.

  6. The Otters like each other, and kicking an Otter makes all of the other Otters really unhappy. There are long debates about whether or not what the Possum moderator did was the Right Thing and whether the laws or constitution are working correctly or whether they should split off and form their own chat room

  7. The new chat room is formed, usually by Otters. Some of the members join both chats, but the majority are split, as the aforementioned debates generated a lot of hostility

  8. Rinse and repeat—

Source: Internet communities: Otters vs. Possums | knowingless

What are microcredentials?

I suppose we should have listened when people told the team I was on at Mozilla time and time again that the name ‘Open Badges’ didn’t work for them. They didn’t seem to get the fact that they could call them anything they liked in their organisations; the important thing was that they aligned with the open standard.

A decade later, and ‘microcredentials’ seems to be one term that’s been adopted, especially towards the formal end of the credentialing spectrum. In this interview, Jackie Pichette, Director of Research and Policy for the Higher Education Quality Council of Ontario, takes a Higher Education-centric look at the landscape.

I may be cynical, but it comes across a lot like “that’s all very well in practice, but what about in theory?”

There’s a lot of confusion around the definition of the microcredential. When my colleagues and I started our research in February 2020, just before the world turned upside down, one of our aims was to help establish some common understanding. We engaged experts and consulted literature from around the world to help us answer questions like, What constitutes a microcredential? How is a microcredential different from a digital badge or a certificate?

We landed on an umbrella definition of programs focused on a discrete set of competencies (i.e., skills, knowledge, attributes) that, by virtue of having a narrow focus, require less time to obtain than traditional credentials. We also came up with a typology to show the variation in this definition. For example, microcredentials can be self-paced to accommodate individual schedules, can follow a defined schedule or feature a mix of fixed- and self-paced elements.

Source: How Do Microcredentials Stack Up? Part 1 | The EvoLLLution

Walking the Covid tightrope 

I’m sharing this article mainly for the genius of the accompanying illustration, although it also does a good job of trying to explain an increasing feeling of English exceptionalism.

The results look increasingly alarming. In pubs, in shops, on public transport and in other enclosed spaces where the virus easily spreads, many people are acting as if the pandemic is over – or at least, over for them. Mask-wearing and social distancing have sometimes become so rare that to practise them feels embarrassing.

Meanwhile, England has become one of the worst places for infections in the world, despite a high degree of vaccination by global standards. Case numbers, hospitalisations and deaths are all rising, and are already much higher than in other western European countries that have kept measures such as indoor mask-wearing compulsory, and where compliance with such rules has remained strong. What does England’s failure to control the virus through “personal responsibility” say about our society?

It’s tempting to start by generalising about national character, and how the supposed individualism of the English has become selfishness after half a century of frequent rightwing government and fragmentation in our lives and culture. There may be some truth in that. But national character is not a very solid concept, weakened by all the differences within countries and all the similarities that span continents. Thanks to globalisation, all European societies have been affected by the same atomising forces. England’s lack of altruism during the pandemic can’t just be blamed on neoliberalism.

Other elements of our recent history may also explain it. England likes to think of itself as a stable country, yet since the 2008 financial crisis it has endured a more protracted period of economic, social and political turmoil than most European countries. The desire to return to some kind of normality may be especially strong here; taking proper anti-Covid precautions would be an acknowledgement that we cannot do that.

Source: With Covid infections rising, the Tories are conducting a deadly social experiment | The Guardian

Kith and kin

This is a great article about how the internet was going to save us from TV and now we’re looking for something to save us from the internet. What we actually need are stronger and deeper relationships with the people around us — our kith and kin.

We are conditioned to care about kin, to take life’s meaning from the relationships with those we know and love. But the psychological experience of fame, like a virus invading a cell, takes all of the mechanisms for human relations and puts them to work seeking more fame. In fact, this fundamental paradox—the pursuit through fame of a thing that fame cannot provide—is more or less the story of Donald Trump’s life: wanting recognition, instead getting attention, and then becoming addicted to attention itself, because he can’t quite understand the difference, even though deep in his psyche there’s a howling vortex that fame can never fill.

This is why famous people as a rule are obsessed with what people say about them and stew and rage and rant about it. I can tell you that a thousand kind words from strangers will bounce off you, while a single harsh criticism will linger. And, if you pay attention, you’ll find all kinds of people—but particularly, quite often, famous people—having public fits on social media, at any time of the day or night. You might find Kevin Durant, one of the greatest basketball players on the planet, possibly in the history of the game—a multimillionaire who is better at the thing he does than almost any other person will ever be at anything—in the D.M.s of some twenty something fan who’s talking trash about his free-agency decisions. Not just once—routinely! And he’s not the only one at all.

There’s no reason, really, for anyone to care about the inner turmoil of the famous. But I’ve come to believe that, in the Internet age, the psychologically destabilizing experience of fame is coming for everyone. Everyone is losing their minds online because the combination of mass fame and mass surveillance increasingly channels our most basic impulses—toward loving and being loved, caring for and being cared for, getting the people we know to laugh at our jokes—into the project of impressing strangers, a project that cannot, by definition, sate our desires but feels close enough to real human connection that we cannot but pursue it in ever more compulsive ways.

Source: On the Internet, We’re Always Famous | The New Yorker

Bring Your Own Stack

Venture Capitalists inhabit a slightly different world than the rest of us. This post, for example, paints a picture of a future that makes sense to people deeply enmeshed in Fintech, but not for those of us outside of that bubble.

That being said, there’s a nugget of truth in there about the need for more specific services for particular sectors, rather than relying on the generic ones provided by Big Tech.

However, the chances are that those will simply plug in to existing marketplaces (e.g. Google Workspace) rather than strike out on their own. But what do I know?

There’s a pressing need — and an opportunity — to build vertical-specific tools for workers striking out on their own. Much has been written about the proliferation of vertical software tools that help firms run their businesses, but the next generation of great companies will provide integrated, vertical software for individuals going solo.

Solo workers venturing out on their own need to feel like they can replace the support of a company model. Traditionally, the firm brings three things to support the core craft or product:

  • Operational support: functions like finance, legal, and HR that help people do their jobs
  • Demand: generating customers (through marketing/sales, branding, and relationships)
  • Networks: access to communities that support the individual
The solo stacks of the future will offer a mix of these three things (depending on what makes sense for any industry), giving workers the tools — and thus, the confidence — to leave their jobs. The software will be vertical-specific, as well, as lawyers, personal trainers, money managers, and graphic designers all need different tools, have different customers to market to, and require access to different networks to do their jobs.

Source: As More Workers Go Solo, the Software Stack Is the New Firm | Future

Fall Regression

I’ve only just discovered the writing of Anne Helen Petersen, via one of the many newsletters and feeds to which I subscribe. I featured her work last week about remote working.

Petersen’s newsletter is called Culture Study and the issue that went out yesterday was incredible. She talks about this time of year — a time I struggle with in particular — and gets right to the heart of the issue.

I’ve learned to take Vitamin D, turn on my SAD light, and to go easy on myself. But there’s always a little voice suggesting that this is how it’s going to be from here on out. So it’s good to hear what other people advise. For Petersen, it’s community involvement.

A teacher recently told me that there’s a rule in her department: no major life decisions in October. The same holds true, she said, for March. But March is well-known for its cruelty. I didn’t realize it was the same for October, even though it makes perfect sense: the charge of September, those first golden days of Fall, the thrill of wearing sweaters for the first time, those are gone. Soon it’ll be Daylight Savings, which always feels like having the wind knocked out of the day. People in high elevations are already showing off their first blasts of snow. We have months, months, to go.

As distractions fade, you’re forced to sit with your own story of how things are going. Maybe you’d been bullshitting yourself for weeks, for months. It was easy to ignore my bad lunch habits when I was spending most of the day outside. Now it’s just me and my angry stomach and scraping the tub of the hummus container yet again. Or, more seriously: now it’s just me swimming against the familiar tide of burnout, not realizing how far it had already pulled me from shore.

[…]

Is this the part of the pandemic when we’re happy? When we’re angry? When we’re hanging out or pulling back, when we’re hopeful or dismayed, when we’re making plans or canceling them? The calendar moves forward but we’re stuck. In old patterns, in old understandings of how work and our families and the world should be. That’s the feeling of regression, I think. It’s not that we’re losing ground. It’s that we were too hopeful about having gained it.

Source: What’s That Feeling? Oh, It’s Fall Regression | Culture Study

Reducing long-distance travel

I agree with what Simon Jenkins is saying here about focusing on the ‘reduce’ part of sustainable travel. However, it does sound a bit like victim-blaming to say that people outside of London travel mainly by car.

We travel primarily by car because of the lack of other options. Infrastructure is important, including outside of our capital city.

It is an uncomfortable fact that most people outside London do most of their motorised travel by car. The answer to CO2 emissions is not to shift passengers from one mode of transport to another. It is to attack demand head on by discouraging casual hyper-mobility. The external cost of such mobility to society and the climate is the real challenge. It cannot make sense to predict demand for transport and then supply its delivery. We must slowly move towards limiting it.

One constructive outcome of the Covid pandemic has been to radically revise the concept of a “journey to work”. Current predictions are that “hybrid” home-working may rise by as much as 20%, with consequent cuts in commuting travel. Rail use this month remains stubbornly at just 65% of its pre-lockdown level. Office blocks in city centres are still half-empty. Covid plus the digital revolution have at last liberated the rigid geography of labour.

Climate-sensitive transport policy should capitalise on this change. It should not pander to distance travel in any mode but discourage it. Fuel taxes are good. Road pricing is good. So are home-working, Zoom-meeting (however ghastly for some), staycationing, local high-street shopping, protecting local amenities and guarding all forms of communal activity.

Source: Train or plane? The climate crisis is forcing us to rethink all long-distance travel | The Guardian

Time millionaires

Same idea, new name: there’s nothing new about prioritising the amount of time and agency you have over the amount of money you make.

It’s just that, after the pandemic, more people have realised that chasing money is a fool’s errand. So, whatever you call it, putting your own wellbeing before the treadmill of work and career is always a smart move.

First named by the writer Nilanjana Roy in a 2016 column in the Financial Times, time millionaires measure their worth not in terms of financial capital, but according to the seconds, minutes and hours they claw back from employment for leisure and recreation. “Wealth can bring comfort and security in its wake,” says Roy. “But I wish we were taught to place as high a value on our time as we do on our bank accounts – because how you spend your hours and your days is how you spend your life.”

And the pandemic has created a new cohort of time millionaires. The UK and the US are currently in the grip of a workforce crisis. One recent survey found that more than 56% of unemployed people were not actively looking for a new job. Data from the Office for National Statistics shows that many people are not returning to their pre-pandemic jobs, or if they are, they are requesting to work from home, clawing back all those hours previously lost to commuting.

Source: Time millionaires: meet the people pursuing the pleasure of leisure | The Guardian

On the digital literacies of regular web users

Terence Eden opened a new private browsing window and started typing “https…”, only to be shown search results for lots of different sites.

He uses this to surmise, and I think he’s probably correct, that users conflate search bars and address bars. Why shouldn’t they? They’ve been one and the same thing in browsers for years now.

Perhaps more worrying is that there’s a whole generation of students who don’t know what a file system structure is…
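The conflation is easy to see in a toy model. Here's a minimal, hypothetical sketch of the kind of heuristic a combined address-and-search bar might apply to decide whether to navigate or to search — these rules are illustrative assumptions, not any real browser's actual logic:

```python
def interpret_omnibox_input(text: str) -> str:
    """Return 'navigate' if the input looks like a URL, else 'search'.

    A deliberately simplified, hypothetical heuristic: real browsers
    use far more elaborate classification.
    """
    text = text.strip()
    # An explicit scheme is unambiguously a URL.
    if text.startswith(("http://", "https://")):
        return "navigate"
    # No spaces and at least one dot: probably a hostname like bbc.co.uk.
    if " " not in text and "." in text:
        return "navigate"
    # Everything else goes to the default search engine.
    return "search"

if __name__ == "__main__":
    for query in ["https", "bbc.co.uk", "how to tie a tie"]:
        print(query, "->", interpret_omnibox_input(query))
```

Note that a bare "https" — no dot, no scheme separator — falls through to the search branch, which is exactly the behaviour Eden observed: partial URLs get treated as search queries.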

There are a few lessons to take away from this.
  • Users don't really understand interfaces
  • Computers don't really understand users
  • Big Data assumes that users are behaving in semi-rational manner

Source: Every search bar looks like a URL bar to users | Terence Eden’s Blog

Leisure is what we do for its own sake. It serves no higher end.

Yes, yes, and yes. I agree wholeheartedly with this view that places human flourishing above work.

To limit work’s negative moral effects on people, we should set harder limits on working hours. Dr. Weeks calls for a six-hour work day with no pay reduction. And we who demand labor from others ought to expect a bit less of people whose jobs grind them down.

In recent years, the public has become more aware of conditions in warehouses and the gig economy. Yet we have relied on inventory pickers and delivery drivers ever more during the pandemic. Maybe compassion can lead us to realize we don’t need instant delivery of everything and that workers bear the often-invisible cost of our cheap meat and oil.

The vision of less work must also encompass more leisure. For a time the pandemic took away countless activities, from dinner parties and concerts to in-person civic meetings and religious worship. Once they can be enjoyed safely, we ought to reclaim them as what life is primarily about, where we are fully ourselves and aspire to transcendence.

Leisure is what we do for its own sake. It serves no higher end.

Source: Returning to the Office and the Future of Work | The New York Times

UK government adviser warns against plans to force the NHS to share data with police forces

It’s entirely unsurprising that governments should seek to use the pandemic as cover for hoovering up data about its citizens. However, it’s up to us to resist this.

Plans to force the NHS to share confidential data with police forces across England are “very problematic” and could see patients giving false information to doctors, the government’s data watchdog has warned.

[…]

Dr Nicola Byrne also warned that emergency powers brought in to allow the sharing of data to help tackle the spread of Covid-19 could not run on indefinitely after they were extended to March 2022.

Dr Byrne, 46, who has had a 20-year career in mental health, also warned against the lack of regulation over the way companies were collecting, storing and sharing patient data via health apps.

She told The Independent she had raised concerns with the government over clauses in the Police, Crime, Sentencing and Courts Bill which is going through the House of Lords later this month.

The legislation could impose a duty on NHS bodies to disclose private patient data to police to prevent serious violence and crucially sets aside a duty of confidentiality on clinicians collecting information when providing care.

Dr Byrne said doing so could “erode trust and confidence, and deter people from sharing information and even from presenting for clinical care”.

She added that it was not clear what exact information would be covered by the bill: “The case isn’t made as to why that is necessary. These things need to be debated openly and in public.”

Source: Plans to hand over NHS data to police sparks warning from government adviser | The Independent

Sports data and GDPR

This is really quite fascinating. The use of player data has absolutely exploded in the last decade, and that's now being challenged from a GDPR (i.e. data privacy) point of view.

Some of it could be said to be reasonably innocuous, but when we get into the territory of players being compared against 'expected goals' things start to get tricky, I'd suggest.

Slade's legal team said the fact players receive no payment for the unlicensed use of their data contravenes General Data Protection Regulation (GDPR) rules that were strengthened in 2018.

Under Article 4 of the GDPR, "personal data" refers to a range of identifiable information, such as physical attributes, location data or physiological information.

BBC News understands that an initial 17 major betting, entertainment and data collection firms have been targeted, but Slade's Global Sports Data and Technology Group has highlighted more than 150 targets it believes have misused data.

[...]

Former Wales international Dave Edwards, one of the players behind the move, said it was a chance for players to take more control of the way information about them is used.

Having seen how data has become a staple part of the modern game, he believes players' rights over how information about them is used should be at the forefront of any future use.

"The more I've looked into it and you see how our data is used, the amount of channels it's passed through, all the different organisations which use it, I feel as a player we should have a say on who is allowed to use it," he said.

Source: Footballers demand compensation over 'data misuse' | BBC News

Precrastinators, procrastinators, and originals

A really handy TED talk focusing on ‘precrastinators’ (with whom I definitely identify) and how they differ from procrastinators and what Grant calls ‘originals’ in terms of creativity.

(I always watch these kinds of things at 1.5x speed, but Adam Grant already talks quickly!)


Source: The surprising habits of original thinkers | Adam Grant

Why commute to an office to work remotely?

This piece by Anne Helen Petersen is so good on the return to work. It’s ostensibly about US universities, but is much more widely applicable.

As I’ve said to several people over the past few weeks, the idea of needing staff to be in a physical office most of the time for ‘serendipitous interactions’ is ridiculous. Working openly allows for much greater serendipity surface than any forced physical co-location might achieve.

On college campuses across the United States, staff are back in the office. More specifically, they’re back in their own, individual offices, with their doors closed, meeting with one another over Zoom or Teams, battling low internet speeds, and reminding each other to mute themselves so that the sound of the meeting doesn’t create a deafening echo effect for everyone else.

For some, the office is just a quick walk or bike ride away. But for many, coming into the office requires a distinctly unromantic commute. It means cobbling together childcare plans, particularly with the nationwide bus driver shortages and school quarantine regulations after illness or a potential exposure. It means paying for parking, and packing or paying for their lunches, and handing over anywhere from 20 minutes to two hours of their day. They are enduring the worst parts of a “traditional” job, only to go into the office and essentially work remote, with worse conditions and fewer amenities (and, in many cases, less comfort) than they had at home. It’s the worst of both work worlds.

[…]

The university might seem like a weird example of an “office,” but it’s a pretty vivid illustration of one. You have leadership who are obsessed with image, cost cutting, and often deeply out of touch with the day-to-day operations of the organization (administration); a group of “creatives” (tenured faculty) who form the outward core of the organization and thus have significant self-importance but dwindling power; full-time employees of various levels who are fundamental to the operation of the organization and chronically under-appreciated (staff); an underclass of contingent and contract workers who perform similar jobs to full-time employees but for less pay, fewer protections, less job security, and are held in far less esteem (grad students, adjuncts, and sub-contracted staff, including building, maintenance, food service, security). And then there’s the all-important customer, whose imagined needs, preferences, whims, demands, and supply of capital serve as the axis around which the rest of the organization rotates (students and their parents).

Source: The Worst of Both Work Worlds | Culture Study

On 'sportswashing'

There has been a lot written and recorded already about Newcastle United, my geographically closest Premier League football team, and the rival of the team I actually support (Sunderland).

I am certainly sympathetic to the idea that individual people should live their values. But there has to be a line drawn somewhere. For example, I really like the music of the artist Morrissey, yet I think some of his politics and other views are distasteful and problematic.

Likewise, when the sovereign wealth fund of a foreign power provides your football team with untold riches, why shouldn’t you celebrate? While I’d love to live in a world where fans own football clubs (see AFC Wimbledon), as the article points out, this purchase needs to be placed in a wider narrative around Brexit and widening inequalities in society.

St James Park, Newcastle
You might expect that this would be controversial in Newcastle. This is not any old country buying an English soccer club. It is a country run by the man the United States concluded to have ordered the dismembering of a journalist, a country conducting a brutal war in Yemen that is among the most barbarous in the world.

And yet, few in Newcastle seem to care. I mean, why should they? Their rivals in the English Premier League are already owned by some pretty unpleasant regimes or people: Manchester City is controlled by Abu Dhabi, and Chelsea by a Russian oligarch with ties to the Kremlin. What’s the point in turning down someone’s money if nobody else is? The fixer who facilitated the Saudi takeover has, incredibly, insisted that the Saudi state was not taking over Newcastle’s soccer club, but rather its sovereign wealth fund, which, the fixer said, genuinely cared about human rights. Both, of course, are run by Crown Prince Mohammed bin Salman.

Beyond this cynical piece of performance art, however, the Newcastle United sale is emblematic of something far more fundamental and depressing about the state of Britain.

Source: Britain’s Distasteful Soccer Sellout | The Atlantic

On the dangers of CBDCs

I can’t remember the last time I used cash. Or rather, I can (for my son’s haircut) because it was so unusual; it’s been about 18 months since my default wasn’t paying via the Google Pay app on my smartphone.

As a result, and because I have also played around with buying, selling, and holding cryptocurrencies, I assumed that a Central Bank Digital Currency (CBDC) would be a benign thing. Sadly, as Edward Snowden explains, they really are not. His latest article is well worth a read in its entirety.

Rather, I will tell you what a CBDC is NOT—it is NOT, as Wikipedia might tell you, a digital dollar. After all, most dollars are already digital, existing not as something folded in your wallet, but as an entry in a bank’s database, faithfully requested and rendered beneath the glass of your phone.

Neither is a Central Bank Digital Currency a State-level embrace of cryptocurrency—at least not of cryptocurrency as pretty much everyone in the world who uses it currently understands it.

Instead, a CBDC is something closer to being a perversion of cryptocurrency—or at least of the founding principles and protocols of cryptocurrency—a cryptofascist currency, an evil twin entered into the ledgers on Opposite Day, expressly designed to deny its users the basic ownership of their money and to install the State at the mediating center of every transaction.

Source: Your Money and Your Life - by Edward Snowden - Continuing Ed — with Edward Snowden

Subsidising trains via a tax on internal flights?

My wife flew down to a work meetup (and to see her family) last week. She got the train back. The flight was about £40, and the train about five times that.

At around seven hours, that journey would have been exempt from these plans, but it’s illustrative of how passengers are currently economically encouraged to destroy the environment.

The Campaign for Better Transport (CBT) called on ministers to outlaw internal UK flights if an equivalent train journey took less than five hours and to resist calls for any cut in air passenger duty.

Mandatory emissions labels on tickets and a frequent flyer levy should also be introduced, the charity said.

The demands came before the 27 October budget, in which the chancellor, Rishi Sunak, may decide to cut taxes on domestic flights in response to pressure from the aviation industry, a possibility mooted by the prime minister earlier this year. Such a move could, however, prove an embarrassment a week before the UK hosts the Cop26 climate conference in Glasgow.

[…]

Paul Tuohy, the chief executive of CBT said: “Cheap domestic flights might seem a good deal when you buy them, but they are a climate disaster, generating seven times more harmful greenhouse emissions than the equivalent train journey.

“Making the ​train cheaper will boost passenger numbers and help reduce emissions from aviation, but any cut to air passenger duty – coupled with a rise in rail fares in January – will send the wrong message about how the government wants people to travel and mean more people choosing to fly.”

Source: Ban UK domestic flights and subsidise rail travel, urges transport charity | The Guardian

Opting out of capitalism

One of the huge benefits of the pandemic has been that it’s allowed people to reflect on their lives. And many people, it seems, realised that their jobs (or work in general) makes them unhappy.

The lying flat movement, or tangping as it’s known in Mandarin, is just one expression of this global unraveling. Another is the current worker shortage in the United States. As of June, there were more than 10 million job openings in the United States, according to the most recent figures from the Labor Department — the highest number since the government began tracking the data two decades ago. While conservatives blame juiced-up pandemic unemployment benefits, liberals counter that people do want to work, just not for the paltry wages they were making before the pandemic.

Both might be true. But if low wages were all that’s at play, we would expect to see reluctant workers at the bottom of the socioeconomic ladder, and content workers at the top. Instead, there are murmurs of dissent at every rung, including from the inner sanctums of Goldman Sachs, where salaries for investment bankers start at $150,000. According to a leaked internal survey, entry-level analysts at the investment bank report they’re facing “inhumane” conditions, working an average of 98 hours a week, forgoing showers and sleep. “I’ve been through foster care,” said one respondent. “This is arguably worse.”

Source: 'Lying Flat': Tired Workers Are Opting Out of Careers and Capitalism | The New York Times

Blissed, Blessed, Pissed, and Dissed

Austin Kleon summarises Bill O’Hanlon’s idea around there being ‘four energies’ that writers can dig into. They may need translating for a British audience (‘pissed’ means something different over here…) but I like it as an organising idea.

Related: Buster Benson’s ‘Seven Modes (for seven heads)’ from his seminal post Live like a hydra.

The energies are split between “what you love and what upsets you”:
  • “Blissed” energy comes from what you’re on fire for and can’t stop doing.
  • “Blessed” means you’ve been gifted something that you feel compelled to share.
  • “Pissed” means you’re pissed off or angry about something.
  • “Dissed” means you feel “dissatisfied or disrespected.”

O’Hanlon goes on to say many of his early books were “written from a combination of pissed and blissed.”

Source: The Four Energies | Austin Kleon

The Stability Fantasy

The last time I was in LA, I hired a Dodge Charger and navigated the huge freeways meeting a client and then visiting a friend. I remember going for a fabled In-N-Out burger and seeing the sky turn orange due to Californian wildfires.

I took a photo, ate my burger, and got back in the car. It’s amazing how quickly we normalise quite extreme things in our lives. Since then, my understanding, awareness, and action around the climate emergency have changed dramatically. But that’s taken five years, and we haven’t got time for everyone to come to their own epiphany; the world is on fire.

The great irony of climate change is that, even though it is now occurring at an incomprehensibly rapid pace from a geologic perspective, it is still moving too slowly for humans to understand it as the crisis that it is. Few of us are geologists, and thinking like one is easier said than done.

I think this is why there haven’t been more successful films about climate change. We love movies about existential threats—mainly aliens—but in those stories individual characters make decisions to deal with the crisis within a couple of weeks. One of the few blockbuster films to deal directly with climate change, The Day After Tomorrow, imagined an Ice Age apocalypse that settles over Earth in a matter of days. Climate scientists rightfully criticized the movie, but I think it says something profound about the climate problem: Unless we unreasonably turn up the speed dial, we are incapable of fitting climate change into the kind of narrative that human beings are used to processing.

And yet, here we are, causing one of the fastest shifts the planet has ever experienced. The sheer pace of change playing out right now is making it harder for us to maintain our myth of a stable planet. The stability fantasy is beginning to crumble.

Source: The Stability Fantasy | Orion Magazine

Singapore is turning into a dystopian surveillance state

Well, this is concerning. Especially given governments' love for authoritarian technologies and copying one another’s surveillance practices.

Singapore surveillance robot

Singapore has trialled patrol robots that blast warnings at people engaging in “undesirable social behaviour”, adding to an arsenal of surveillance technology in the tightly controlled city-state that is fuelling privacy concerns.

From vast numbers of CCTV cameras to trials of lampposts kitted out with facial recognition tech, Singapore is seeing an explosion of tools to track its inhabitants.

[…]

The government’s latest surveillance devices are robots on wheels, with seven cameras, that issue warnings to the public and detect “undesirable social behaviour”.

This includes smoking in prohibited areas, improperly parking bicycles, and breaching coronavirus social-distancing rules.

During a recent patrol, one of the “Xavier” robots wove its way through a housing estate and stopped in front of a group of elderly residents watching a chess match.

“Please keep one-metre distancing, please keep to five persons per group,” a robotic voice blared out, as a camera on top of the machine trained its gaze on them.

Source: ‘Dystopian world’: Singapore patrol robots stoke fears of surveillance state | Singapore | The Guardian

Good decision-making

Some useful advice from Ed Batista about the difference between ‘good decision-making’ and ‘making the right decision’.

I believe the path to getting unstuck when faced with a daunting, possibly paralyzing decision... involves a fundamental re-orientation of our mindset: Focusing on the choice minimizes the effort that will inevitably be required to make any option succeed and diminishes our sense of agency and ownership. In contrast, focusing on the effort that will be required after our decision not only helps us see the means by which any choice might succeed, it also restores our sense of agency and reminds us that while randomness plays a role in every outcome, our locus of control resides in our day-to-day activities more than in our one-time decisions.

So while I support using available data to rank our options in some rough sense, ultimately we’re best served by avoiding paralysis-by-analysis and moving forward by:

  1. paying close attention to the feelings and emotions that accompany the decision we’re facing,
  2. assessing how motivated we are to work toward the success of any given option, and
  3. recognizing that no matter what option we choose, our efforts to support its success will be more important than the initial guesswork that led to our choice.

This view is consistent with the work of Stanford professor Baba Shiv, an expert in the neuroscience of decision-making. Shiv notes that in the case of complex decisions, rational analysis will get us closer to a decision but won’t result in a definitive choice because our options involve trading one set of appealing outcomes for another, and the complexity of each scenario makes it impossible to determine in advance which outcome will be optimal.

Source: Stop Worrying About Making the Right Decision | Ed Batista

Carbon offsets are pure greenwashing

Having travelled here, there, and everywhere by air for both personal and professional business over the last decade, it took me too long to realise the scale of the climate emergency.

When I did, I looked into climate offsets, but found that they’re hugely problematic, and often a scam. That’s why I’m not flying any more. It’s good to hear Greenpeace’s Executive Director Jennifer Morgan come out so strongly against them, and put pressure back on the fossil fuel industry.

Carbon offsets are allowing the world's biggest polluters to forge ahead with business plans that are threatening global climate goals, the head of Greenpeace International said in an interview.

The model allows polluting companies to offset their emissions by buying credits from projects that reduce or avoid the release of climate-warming CO2 elsewhere, such as mass tree plantings or solar power farms - which could be worth $50 billion by 2030 according to a task force created to scale up the market.

Environmental advocates such as Greenpeace say this is allowing big emitters like oil majors to put off cutting their own emissions and avoid divesting from hydrocarbons, a primary source of greenhouse gases that cause global warming.

“There’s no time for offsets. We are in a climate emergency and we need phasing out of fossil fuels,” Greenpeace’s Executive Director Jennifer Morgan said at the Reuters Impact conference.

She said one issue with planting trees as offsets was that it takes 20 years for trees to grow and offset emissions happening right now. In the interim wildfires could destroy the chance of reductions.

“These offsetting schemes … are pure ‘greenwash’ so that the companies, oil companies, can continue to do what they’ve been doing and make a profit,” she said.

Source: Greenpeace calls for end to carbon offsets | Reuters

Six Causes of Burnout at Work

This is an interesting article from UC Berkeley’s Greater Good Magazine based on journalist Jennifer Moss' new book The Burnout Epidemic: The Rise of Chronic Stress and How We Can Fix It. It not only talks about organisational factors, but personality types as well.

1. Workload. Overwork is a main cause of burnout. Working too many hours is responsible for the deaths of millions of people every year, likely because overwork makes people suffer weight loss, body pain, exhaustion, high levels of cortisol, sleep loss, and more.

2. Perceived lack of control. Studies show that autonomy at work is important for well-being, and being micromanaged is particularly de-motivating to employees. Yet many employers fall back on watching their employees’ every move, controlling their work schedule, or punishing them for missteps.

3. Lack of reward or recognition. Paying someone what they are worth is an important way to reward them for their work. But so is communicating to people that their efforts matter.

4. Poor relationships. Having a sense of belonging is necessary for mental health and well-being. This is true at work as much as it is in life. When people feel part of a community, they are more likely to thrive. As a Gallup poll found, having social connections at work is important. “Employees who have best friends at work identify significantly higher levels of healthy stress management, even though they experience the same levels of stress,” the authors write.

5. Lack of fairness. Unfair treatment includes “bias, favoritism, mistreatment by a coworker or supervisor, and unfair compensation and/or corporate policies,” writes Moss. When people are being treated unjustly, they are likely to burn out and need more sick time.

6. Values mismatch. “Hiring someone whose values and goals do not align with the values and goals of the organization’s culture may result in lower job satisfaction and negatively impact mental health,” writes Moss. It’s likely that someone who doesn’t share in the organization’s mission will be unhappy and unproductive, too.

Source: Six Causes of Burnout at Work | Greater Good

Facebook isn't just anti-competitive, it's anti-consumer

I can’t quite understand why people still use Facebook’s services, other than vendor lock-in.

The tool I created, a browser extension called Unfollow Everything, allowed users to delete their News Feed by unfollowing their friends, groups, and pages. The News Feed, as users of Facebook know, is that never-ending page that greets you when you log in. It’s the central hub of Facebook. It’s also a major source of revenue. As a Facebook whistleblower observed on 60 Minutes on Sunday, time spent on the platform translates to ads viewed and clicked on, which in turn translates to billions of dollars for Facebook. The News Feed is the thing that keeps people glued to the platform for hours on end, often on a daily basis; without it, time spent on the network would drop considerably.

[…]

Facebook’s behavior isn’t just anti-competitive; it’s anti-consumer. We are being locked into platforms by virtue of their undeniable usefulness, and then prevented from making legitimate choices over how we use them—not just through the squashing of tools like Unfollow Everything, but through the highly manipulative designs and features platforms adopt in the first place. The loser here is the user, and the cost is counted in billions of wasted hours spent on Facebook.

Source: Facebook banned me for life because I created the tool Unfollow Everything | Slate

Traffic to news sites went up during the Facebook outage.

It’s really problematic that most people get their news via algorithmic news feeds.

On August 3, 2018, Facebook went down for 45 minutes. That’s a little baby outage compared to the one this week, when, on October 4, Facebook, Instagram, and WhatsApp were down for more than five hours. Three years ago, the 45-minute Facebook break was enough to get people to go read news elsewhere, Chartbeat‘s Josh Schwartz wrote for us at the time.

So what happened this time around? For a whopping five-hours-plus, people read news, according to data Chartbeat gave us this week. (And they went to Twitter; Chartbeat saw Twitter traffic up 72%. If Bad Art Friend had been published on the same day as the Facebook outage, Twitter would have literally exploded, presumably.)

Source: When Facebook went down this week, traffic to news sites went up » Nieman Journalism Lab

Who wants a metaverse created by Facebook?

No-one.

Facebook is nearing a reputational point of no return. Even when it set out plausible responses to Ms Haugen, people no longer wanted to hear. The firm risks joining the ranks of corporate untouchables like big tobacco. If that idea takes hold, Facebook risks losing its young, liberal staff. Even if its ageing customers stick with the social network, Facebook has bigger ambitions that could be foiled if public opinion continues to curdle. Who wants a metaverse created by Facebook? Perhaps as many people as would like their health care provided by Philip Morris.
Source: Facebook is nearing a reputational point of no return | The Economist

Microcast #095 — Rewilding your serendipity surface


Attention, Big Tech, and choosing to curate rather than be curated.

Show notes

See also: Fraidycat and Rewilding Your Attention (Read Write Collect)


Image: Pexels

Background music: Shimmers by Synth Soundscapes (aka Mentat)

Microcast #094 — Solarpunk vs technocratic pharaohs

Overview

A thematic look at sustainable futures, from equitable approaches to chimeric fetuses and phallic spaceships.

Show notes

See also: Bright green, blight green, and lean green futures (Open Thinkering)


Image: Solarpunk Flag by @Starwall@radical.town

Background music: Shimmers by Synth Soundscapes (aka Mentat)

Microcast #093 — Boring hot dogs

Overview

Everything from life-shortening foods to Twitter's attempt to control feuds.

Show notes


Image via Pexels

Background music: Shimmers by Synth Soundscapes (aka Mentat)

Microcast #092 — Drinking in the sunlight

Overview

Another eclectic mix of articles, from Apple to alcohol.

Show notes


Image via Pexels

Background music: Shimmers by Synth Soundscapes (aka Mentat)

Microcast #091 — Arguing in circles

Overview

An eclectic mix of articles in today's microcast, covering everything from teens and tech to Fediverse functionality.

Show notes


Image via Pexels

Background music: Shimmers by Synth Soundscapes (aka Mentat)

Microcast #090 — Doing what you love in an angry world

Overview

I try and spot a theme between the three articles I pick out. Today's is something around (negative) emotions and getting on (well) with others.

Show notes


Image: Nick Fewings

Background music: Shimmers by Synth Soundscapes (aka Mentat)

Microcast #089 — Circumvention

Overview

In this microcast I discuss three articles loosely related to censorship and the circumvention thereof.

Show notes


Image: Michael Dziedzic

Background music: Shimmers by Synth Soundscapes (aka Mentat)

Microcast #088 — Spontaneous fluctuations

Overview

In which I pick another three interesting items from my bookmarks to discuss.

Show notes


Image: Richard Horvath

Background music: Shimmers by Synth Soundscapes (aka Mentat)

Microcast #087 — Back in the game!

Overview

It's been a long time since the last microcast, but they're back! Comments? Questions? Add them below!

Show notes


Image: Erik McClean

Background music: Shimmers by Synth Soundscapes (aka Mentat)

How long before everyone's using decentralised messengers?

I first experimented with Linux in 1997. It wasn't until 20 years later that I was running it as my default operating system.

I hope it doesn't take as long for something like Briar to be my default messaging app! It's difficult to make the case for it when everyone's got WhatsApp, Signal, Telegram, or the like.

But the radical, decentralised, approach to privacy that Briar takes is refreshing.

Wildfire

Another potential use case scenario for Briar are natural disasters. With the climate crisis getting worse day by day, destruction of critical infrastructure is a problem affecting more and more parts of the world, as the recent floods in Europe and China and the wildfires all around the world have shown.

While Briar can definitively be useful in those situations, its trade-offs in favor of privacy are severely limiting its connectivity capabilities. To make an example, imagine your city just got nearly extinguished by a wildfire, destroying all the telecommunications infrastructure that was once there. Fortunately, you and your friends got Briar installed, so when a friend of you drops by you grasp at the chance and write messages to all your friends in-town. One could think that all those messages get synchronized to your friend’s device, so she can serve as a carrier for your other friends' messages. Unfortunately, that’s not how Briar works.

As I’ve outlined before, metadata protection is one of Briar’s primary goals. Therefore, Briar doesn’t synchronize messages to your friend Alice with Bob when you meet him in order to not let Bob know that you’re communicating with Alice. This is very useful when you can’t trust even your contacts not to be spying on you, but it’s most likely a huge problem when connectivity is all you want in the face of natural disasters.

This message routing scheme used by Briar is called “single-hop social mesh” because you only ever send messages to your contacts if you have a direct connection to them. During catastrophes you most likely want to have at least “multi-hop social mesh” or yet even better “public mesh” where you share messages not only with your contacts but with anybody using Briar. However, as connectivity improves, privacy gets worse because people will know when you’re communicating with whom.

The good news are that Briar is currently receiving funding to conduct research on supporting other types of mesh. Still it will take a lot of time until something gets implemented in Briar, so all of this should be considered long-term perspectives. Note, though, that this mainly affects private chats and private groups. If you and all your friends are part of a forum (Briar’s “public” version of group chats), Alice will indeed serve as a carrier for your messages sent to that forum.

Source: Confronting Briar with disasters | Nico Alt
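The routing trade-off the article describes can be sketched as a toy simulation. This is illustrative Python only, not Briar's actual implementation; the function names and the contact graph are invented for the example. Under "single-hop social mesh", a message travels only over a direct connection between sender and contact, so intermediaries like Alice never carry it onward; under a hypothetical "public mesh", anyone holding the message relays it to everyone they meet.

```python
from collections import deque


def single_hop_delivery(contacts, sender):
    """Single-hop social mesh: messages reach only the sender's direct contacts."""
    return {sender} | set(contacts.get(sender, []))


def public_mesh_delivery(contacts, sender):
    """Public mesh: every holder relays the message to every peer they meet,
    so it floods the whole connected component of the contact graph."""
    reached = {sender}
    queue = deque([sender])
    while queue:
        node = queue.popleft()
        for peer in contacts.get(node, []):
            if peer not in reached:
                reached.add(peer)
                queue.append(peer)
    return reached


# A small town after a disaster: you have a direct connection to Alice,
# Alice to Bob, Bob to Carol. Nobody else can reach you directly.
contacts = {
    "you": ["alice"],
    "alice": ["you", "bob"],
    "bob": ["alice", "carol"],
    "carol": ["bob"],
}

print(sorted(single_hop_delivery(contacts, "you")))  # only you and Alice
print(sorted(public_mesh_delivery(contacts, "you")))  # everyone, via relaying
```

The privacy cost the article mentions falls out of the second function: in the flooding version, Bob and Carol necessarily learn that a message originating with you exists, which is exactly the metadata Briar's single-hop design withholds.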

Moral outrage and social media

I’ve largely quit Twitter these days, mainly because the social network I joined in 2007 turned into a rage machine sometime in the last 5-10 years. I suspect it had something to do with their IPO in 2013 and transformation to what I term “software with shareholders”.

This Yale study shows a link between increased outrage and the number of likes and retweets received. But then, we already knew that.

Moral outrage can be a strong force for societal good, motivating punishment for moral transgressions, promoting social cooperation, and spurring social change. It also has a dark side, contributing to the harassment of minority groups, the spread of disinformation, and political polarization, researchers said.

Social media platforms like Facebook and Twitter argue that they merely provide a neutral platform for conversations that would otherwise happen elsewhere. But many have speculated that social media amplifies outrage. Hard evidence for this claim was missing, however, because measuring complex social expressions like moral outrage with precision poses a technical challenge, the researchers said.

To compile that evidence, Brady and Crockett assembled a team which built machine learning software capable of tracking moral outrage in Twitter posts. In observational studies of 12.7 million tweets from 7,331 Twitter users, they used the software to test whether users expressed more outrage over time, and if so, why.

The team found that the incentives of social media platforms like Twitter really do change how people post. Users who received more “likes” and “retweets” when they expressed outrage in a tweet were more likely to express outrage in later posts. To back up these findings, the researchers conducted controlled behavioral experiments to demonstrate that being rewarded for expressing outrage caused users to increase their expression of outrage over time.

Source: ‘Likes’ and ‘shares’ teach people to express more outrage online | YaleNews

Motivating people who don't need a job

There are two kinds of people who don’t need the job you’re providing for them. The first kind is the independently wealthy. The second kind is the person with an in-demand skillset (or rare knowledge/experience).

The last time I was employed, I kept reminding my boss that I came from consulting and I could always go back to it. And that’s what I did. Employers whose main way of motivating employees is to implicitly threaten them with ‘not having a job’ aren’t worth working for.

You should manage all of your employees as if they don’t “need” their jobs and have other options — whether those options are family money or the ability to go out and get another job with their skills. There are two reasons for that:
  1. Assuming you’re hiring good people, it’s very likely they do have other options. It might be a pain for someone to leave and find another job, but generally it’s something people are able to do.

  2. Using someone’s paycheck as your primary leverage might be effective in the very short-term, but it’s rarely a way to build or retain an engaged, invested staff in the long-term.

The way you motivate someone who doesn’t need the money is the same way you should motivate people who do need the money: by giving them meaningful roles with real responsibility where they can see how their efforts contribute to a larger whole, giving them an appropriate amount of ownership over their work and input into decisions that involve that work, providing useful feedback, recognizing their contributions, helping them feel they’re making progress toward things that matter to them, and — importantly — not doing things that de-motivate people (like yelling or constantly shifting goals or generally being a jerk).

Source: how do I manage an employee who doesn’t need the job? | Ask a Manager

100% inheritance tax?

If we can’t stop people racking up ridiculous sums of money, we can definitely prevent them from passing on that wealth to their kids. Thankfully, more enlightened rich people (in this case actor Daniel Craig) are already putting their own measures in place.

In a Hollywood interview published this week in Candis magazine, Mr Craig made reference to Andrew Carnegie, the Scottish-born US industrialist and one of the wealthiest men in history.

“Isn’t there an old adage that if you die a rich person, you’ve failed?” he said. “I think Andrew Carnegie gave away what in today’s money would be about $11 billion, which shows how rich he was because I’ll bet he kept some of it too.

“But I don’t want to leave great sums to the next generation. I think inheritance is quite distasteful. My philosophy is: get rid of it or give it away before you go.”

Source: ‘Inheritance is distasteful’: Daniel Craig’s children will not be getting his Bond millions | The Telegraph

Culture is in a state of constant flux

My parents, the son of a factory worker and assistant baker and the daughter of domestic servants, were both the first in their families to go to university. As such, they wanted to ensure that their children, my sister and I, knew our way around ‘culture’.

Hence, for me, a childhood punctuated not only by piano lessons and visits to National Trust properties but also by visits to the cheapest seats at the theatre to see ballets and plays. In their mind, at least back then, there was ‘Culture’ (with a capital ‘C’) to which we had to be introduced.

As Kojo Koram from the School of Law at Birkbeck, University of London, writes, however, culture is something that is continually remade by the people living it. These different conceptions mark the boundaries of the culture wars currently being played out in British politics and society.

In the 1960s and 70s, when [Stuart] Hall was writing, most British intellectuals dismissed the new mass culture taking hold in the country as a passing fad that did not deserve the attention given to Shakespeare, Elgar or Hogarth. But Hall recognised how it offered an increasingly multicultural British population the opportunity to interpret and experience life as it was lived on the ground. Rather than seeing culture as something fixed and unchanging that needed constant protection, Hall saw it as something that underwent “constant transformation” and was always being made and remade by the people living it, a moving force that perpetually created new identities.

It is no coincidence that so many of the primary battlegrounds where today’s culture wars are being staged are the elite institutions that represent a traditional British hierarchy: stately homes, Oxford university common rooms, the Last Night of the Proms. To culture warriors on the right, these institutions best represent Britain’s national culture as a whole. That they are exclusive is part of their appeal: when culture is defined as something that only a few people can access or control, its preservation is best entrusted to high-ranking authorities.

Source: Here’s what the right gets wrong about culture: it’s not a monument, but a living thing | The Guardian

The Great Reckoning

When I was a teacher and school senior leader in my twenties, I worked all the hours. Not only that, but I was writing my doctoral thesis and we had a young baby. I’ve never worked so hard or been so close to burnout.

Since switching to working from a home office in 2012, my life has been transformed. With no commute and no planning, preparation, and assessment, I’m paid for the time I actually work. And since setting up a co-op in 2017, I’ve been jointly in charge of the means of production as well.

As Cal Newport writes in The New Yorker, others have been cottoning on to these advantages since the pandemic, leading to a wave of resignations.

These people are generally well-educated workers who are leaving their jobs not because the pandemic created obstacles to their employment but, at least in part, because it nudged them to rethink the role of work in their lives altogether. Many are embracing career downsizing, voluntarily reducing their work hours to emphasize other aspects of life.
Many well-compensated but burnt-out knowledge workers have long felt that their internal ledger books were out of balance: they worked long hours, they made good money, they had lots of stuff, they were exhausted, and, above all, they saw no easy options for changing their circumstances. Then came shelter-in-place orders and shuttered office buildings. This particular class of workers were thrown into their own Zoom-equipped versions of Walden Pond. Diversion and entertainment were stripped down to basic forms, and it became difficult to spend more than the cost of a Netflix subscription or batch of sourdough starter to keep occupied. The absence of visits with friends and family reinforced the value of social connection. The unceasing presence of video conferencing and e-mail enhanced the Kafkaesque superfluousness of many of the activities that dominated the pre-pandemic workday. This class of workers was suddenly staring at the proverbial cabin and wondering if a copper pump would really be worth the labor required to cultivate another acre.
Source: Why Are So Many Knowledge Workers Quitting? | The New Yorker

Brains melted like butter in a microwave

This is a really powerful essay about the American response, or lack of it, to the news that the Taliban have taken Kabul. The author, Antonio García Martínez, contends that Americans are “no longer a serious people” and spend too much time manufacturing reality.

You see, in the Before Times there was a reality ‘out there’, peoples and cultures unlike ours that stubbornly refused to think and act as we did (and we knew it); facts on the ground that were immune to social-media spirals of bloviation and simply could not be ignored (and we knew it). We grappled with them, debated them, rallied consensus around them, and just dealt with reality however poorly perceived it might have been. And leaders who could not deal with inarguable realities, such as Carter with his botched Iranian rescue operation, did not stay leaders for very long.
The war in Afghanistan cost a trillion dollars over 20 years, thousands of lives, and was ultimately an exercise in futility:
This might seem flip and 'too soon', but the irony highlights the real civilizational difference here: one where combat is via prissy morality and pure spectacle, and one where the battles are literal and deadly. One where elites contest power via spiraling purity and virality contests waged online, and where defeat means ‘cancelation’ or livestreamed ‘struggle sessions’ around often imaginary or minor offenses. And another place where the price of defeat is death, exile, rape, destitution, and fates so grim people die dangling from airplanes in order to escape.

In short, an unserious country mired in the most masturbatory hysterics over bullshit dramas waged war against an insurgency of religious zealots fired by a 7th-century morality, and utterly and totally lost.

And all we can do in the wake of it, with our brains melted like butter in a microwave by four years of Trump and Twitter and everything else, is to once again try and understand in our terms a hyper-violent insurgency of fanatics, guilty of every manner of cultural barbarism, now running a country with the population of Texas.

Source: We are no longer a serious people | The Pull Request

What is 'solarpunk'?

I've seen people on the Fediverse, including people I know and have worked with, describe themselves as 'solarpunks'. It seems like the approach is becoming more mainstream, which is no bad thing.

Lush green communities with roof top gardens, floating villages, transport fuelled by clean energy and hope-filled sci-fi tales. Imagine a world in which existing technologies are deployed for the greater good of both people and the planet.

It's called solarpunk. The term, coined in 2008, refers to an art movement which broadly envisions how the future might look if we lived in harmony with nature in a sustainable and egalitarian world.

"Solarpunk is really the only solution to the existential corner of climate disaster we have backed ourselves into as a species," says Michelle Tulumello, a solarpunk art teacher in New York state.

"If we wish to survive and keep some of the things we care about on the earth with us, it involves a necessary fundamental alteration in our world view where we change our outlook completely from competitive to cooperative."

Source: What is solarpunk and can it help save the planet? | BBC News

Global temperatures: 1980-2021

This xkcd chart starts in 1980, which is when I was born, so although it has Randall Munroe’s details on it, in some ways it also feels personal to me.

Source: xkcd: Global Temperature Over My Lifetime

Five-hour workdays

I’ve been saying for as long as anyone will listen to me that I can do a maximum of four hours of high-quality knowledge work per day. Add on some time for emails and ‘sync’ meetings, and five seems about right.

The difficulty, of course, is the financial side of things. If you’re employed, will your employer pay you the same amount for working fewer hours, even if productivity increases? And if you’re self-employed, will clients sign off contracts that stipulate five-hour days?

The eight-hour working day is a relatively new concept, widely accepted to have been cemented by Ford Motor Company a century ago as a means of keeping production going 24 hours a day without putting undue demands on individual members of staff. Ford’s experiment led to an increase in overall productivity; but proponents of five-hour days, including Californian ecommerce business Tower Paddle Boards and German digital consultancy Rheingans, say they experienced a similar phenomenon when they moved to compressed-hour models.

Like Corcoran, Tower CEO Stephan Aarstol says he was startled by the results when the business adopted a five-hour working day in 2015. Staff worked from 8am to 1pm with no breaks and, because employees became so focused on maximising output in order to have the afternoons to themselves, turnover increased by 50 per cent.

Source: The perfect number of hours to work every day? Five | WIRED

The Cult of the Upper Classes

Today is the day that the IPCC report is released. Our response to it, in the UK at least, depends a great deal on the attitude that the upper classes have towards it. That shouldn’t be the case.

Far too many people in this country remain wedded to the cult of the upper class – a cult that should long ago have withered to a death, but which is instead enabled by the media, by stealth, and by a fawning faith in aristocracy that still prevails. Its bastard by-product, nepotism, remains rife – elevating those of little talent, charm or ability to some of the top gigs in the land.
Source: England’s Upper Classes: A Dangerous Cult – Byline Times

Internal Google comics

I discovered these comics, made over several years by someone who worked at Google, via Hacker News. The one below I thought was a fantastic roast of the kind of 'leadership' I've seen at a few organisations.

Source: Goomics

5 main concerns of top scientists about the relaxing of UK Covid restrictions

This warning to the UK government, setting out the ‘five main concerns’ of top scientists, is quite alarming.

First, unmitigated transmission will disproportionately affect unvaccinated children and young people who have already suffered greatly. Official UK Government data show that as of July 4, 2021, 51% of the total UK population have been fully vaccinated and 68% have been partially vaccinated. Even assuming that approximately 20% of unvaccinated people are protected by previous SARS-CoV-2 infection, this still leaves more than 17 million people with no protection against COVID-19. Given this, and the high transmissibility of the SARS-CoV-2 Delta variant, exponential growth will probably continue until millions more people are infected, leaving hundreds of thousands of people with long-term illness and disability. This strategy risks creating a generation left with chronic health problems and disability, the personal and economic impacts of which might be felt for decades to come.

Second, high rates of transmission in schools and in children will lead to significant educational disruption, a problem not addressed by abandoning isolation of exposed children (which is done on the basis of imperfect daily rapid tests). The root cause of educational disruption is transmission, not isolation. Strict mitigations in schools alongside measures to keep community transmission low and eventual vaccination of children will ensure children can remain in schools safely. This is all the more important for clinically and socially vulnerable children. Allowing transmission to continue over the summer will create a reservoir of infection, which will probably accelerate spread when schools and universities re-open in autumn.

Third, preliminary modelling data suggest the government’s strategy provides fertile ground for the emergence of vaccine-resistant variants. This would place all at risk, including those already vaccinated, within the UK and globally. While vaccines can be updated, this requires time and resources, leaving many exposed in the interim. Spread of potentially more transmissible escape variants would disproportionately affect the most disadvantaged in our country and other countries with poor access to vaccines.

Fourth, this strategy will have a significant impact on health services and exhausted health-care staff who have not yet recovered from previous infection waves. The link between cases and hospital admissions has not been broken, and rising case numbers will inevitably lead to increased hospital admissions, applying further pressure at a time when millions of people are waiting for medical procedures and routine care.

Fifth, as deprived communities are more exposed to and more at risk from COVID-19, these policies will continue to disproportionately affect the most vulnerable and marginalised, deepening inequalities.

Source: Mass infection is not an option: we must do more to protect our young | The Lancet

Skills-based hiring vs universities

This is Stephen Downes' commentary on an article by Tom Vander Ark. I think crunch time is coming for universities, especially when you think about how people are increasingly applying for jobs with portfolios, microcredentials, and proof of experience, rather than simply a CV with a degree on it.

Educators need to be aware that the marketing campaign against their unique value proposition is well underway. "Companies are missing out on skilled, diverse talent when they arbitrarily ‘require’ a four-year degree. It’s bad for workers and it’s bad for business. It doesn’t have to be this way," says former McKinsey partner Byron Auguste, who founded Opportunity@Work. "Instead of ‘screening out’ by pedigree, smart employers are increasingly ‘screening in’ talent for performance and potential." The question for colleges and universities is this: if people no longer value your degrees and certificates, what will you be selling them when you charge them tuition fees?
Source: The Rise of Skills-Based Hiring And What it Means for Education | Stephen Downes

Mr Bingo's Zoom backgrounds

This made me laugh, especially as in the midst of the pandemic I was using a green screen and changing my Zoom background every day!

<img src="uploads/2024/4b04687167.jpg" alt="Stretched letters saying 'Zoom is destroying my soul'" />

Source: Zoom backgrounds | Mr Bingo: Artist, speaker and twat

On Twitter addiction

I used to be addicted to Twitter before it was cool to be addicted to Twitter. Back when all you got was 140 characters, and I’d find myself composing tweets about my IRL experiences and find that I was basically thinking in tweet-sized chunks.

I’ve since switched most of my attention to the Fediverse (join me?) but there’s something insidious about Twitter that pulls you back in. At least turning off the algorithmic timeline (something you have to keep doing) dials down the rage a little bit…

Circle of chairs with Twitter logos

I know I’m an addict because Twitter hacked itself so deep into my circuitry that it interrupted the very formation of my thoughts. Twenty years of journalism taught me to hit a word count almost without checking the numbers at the bottom of the screen. But now a corporation that operates against my best interests has me thinking in 280 characters. Every thought, every experience, seems to be reducible to this haiku, and my mind is instantly engaged by the challenge of concision. Once the line is formed, why not put it out there? Twitter is a red light, blinking, blinking, blinking, destroying my ability for private thought, sucking up all my talent and wit. Put it out there, post it, see how it does. What pours out is an ungodly sluice of high-minded opinions, sharp rebukes, jokes, transactional compliments, and mundane bulletins from my private life (to the extent that I have one anymore).
Source: A Twitter Addict Realizes She Needs Rehab | The Atlantic

Propeller-based car that can go faster than the wind

Main-Character Energy

Before starting therapy, my wife said that she was concerned that I might “lose my superpowers”. One of the ways of thinking about this is as the Main-Character Energy discussed in this New Yorker article. It’s a vitality you bring to each day because you see yourself in a starring role.

Therapy did strip me of that, but in a good way. Instead of casting myself as some Hollywood actor, I now see myself in a much more realistic light, free of the kind of distorted, mediated self-image that social media encourages. It means that I see myself as part of a whole, rather than set apart from it.

The impulse to see oneself as the focal point of the action is all the more powerful as we emerge from the dull isolation of the pandemic, when activities were limited to the likes of re-growing scallions and feeding bulbous sourdough starters. Post-covid, we want to reclaim control of our stories, exert ourselves upon the world, take our places as protagonists once more—and then post about it. During quarantine, the Internet was one of the few tethers to public connection. But publishing evidence of any social engagements, even C.D.C.-compliant ones, came with the risk of being shamed as reckless or self-indulgent. Now, suddenly, much of that fear of critique is gone. The “return of fomo,” as a recent New York cover described it, means the return of jealousy-inducing Instagram stories and glamorous TikToks.
Source: We All Have “Main-Character Energy” Now | The New Yorker

Parasocial relationships through digital media

I think we’ve all felt a close affinity and, dare I say, relationship with people who wouldn’t know who we were if we met them in real life. In fact, I’ve kind of experienced the other side of this due to my TEDx Talk and the TIDE podcast. People at events would come and talk to me as if they knew me.

It’s nice, in a way, although it makes for very one-sided conversations until you get to know people. I think it’s likely to happen again with the Tao of WAO podcast.

Over the past decade, it has become increasingly common for people to develop intense one-sided relationships with famous people on the internet. What are called parasocial relationships (meaning almost social, or perversely social) have spread almost everywhere. For example, John Mulaney fans share concern over his recently messy personal life as much as they laugh at his jokes. Fans of K-pop groups like Blackpink (called Blinks) and Twice (called Onces) flood YouTube videos with millions of comments in support of their favorite performers. (“Rosé has worked so hard for this moment, let’s support her as much as we can!!”) Zoomers goof off in the chat for hours watching Twitch livestreamers play Minecraft or PUBG. Even Peloton trainers are marketed as supporting us on our fitness journeys rather than coaches who simply encourage us to sweat.

The hosts of podcasts in particular are the subject of these intense feelings of connection, as many observers, like Rachel Aroesti in this Guardian piece for instance, have pointed out. I have a few parasocial podcast obsessions myself, particularly the podcasting family the McElroy Brothers, who make the comedy advice show My Brother, My Brother and Me and the “actual play” Dungeons and Dragons podcast The Adventure Zone, among other things. I follow fan subreddits, chuckle at McElroy memes, and buy merch to support the good good boys (as they are called). I have become as much a fan of the McElroys “themselves” as I am a fan of their content. I know their childhood nicknames, their struggles with depression and social anxiety, and I know about the time Justin got fired from Blockbuster for stealing a Fight Club DVD.

Source: Why Can’t We Be Friends | Real Life

The album is no longer the unit of musical currency

I’m sitting listening to the new Kings of Convenience album while writing this. As this article points out, listening to albums is an increasingly unlikely thing to do in the era of streaming music services.

This isn’t accidental: it’s easy to hop between services when the unit of currency is an ‘album’. But when it’s a regularly-updated playlist that’s only available on a particular platform (e.g. Spotify) that’s a different proposition altogether.

To help listeners find their way in the endless aisles of digital music, streaming providers created playlists — but this new way of listening has created unintended consequences for artists and songwriters. Today, three services make up two-thirds of the streaming economy: Spotify, which has an estimated 32 percent of the market, Apple Music (18 percent), and Amazon Music (14 percent). But Spotify dominates the conversation both because of its market power and its immensely popular playlists. In 2017, 68 percent of all listening on Spotify was from a company or user playlist, according to the company’s 2018 Securities and Exchange Commission filing. Its platform has more than 4 billion playlists, 3,000 of which are owned by Spotify, curated by a mix of algorithms and editors.

Its most prominent playlists have serious cultural power. RapCaviar shapes the sound of hip-hop, and can turn indie rappers into household names. The genre-agnostic, slightly quirky playlist Lorem curates the vibe for Spotify’s Gen Z listeners. In 2020, listeners ages 16 to 40 used playlists as their primary source for discovering new music on the platform, according to the company. So today, a placement atop one of its playlists can make or break a song.

Spotify isn’t shy about the marketing power of its playlists. In its SEC filing, the company wrote as much, crediting Lorde’s breakout global success to her placement on a single playlist: Sean Parker’s Hipster International. But her example may be an outlier. The challenge for most artists is that playlist listeners frequently don’t know who they’re listening to. A song with high completion rates on a playlist might end up on more playlists, accumulating millions of streams for an artist who remains effectively nameless. In the best-case scenario, these streams, which pay very low royalties compared to radio, could help land the song a coveted advertisement, or better yet, pique the attention of Top 40 radio programmers.

Source: How streaming made hit songs more important than the pop stars who sing them | Vox

Leslie Caron on Cary Grant's attitude to money

I read most things online, but I came across this one via my print subscription to Guardian Weekly (which I recommend highly). Leslie Caron, who danced and acted with a host of big names, highlights Cary Grant’s attitude towards money.

I’ve always found Cary Grant fascinating, and in fact my online avatar used to be a photo of him. It seems, as Leslie Caron points out, that one’s mindset can be out of step with reality — which is a lesson to us all.

Who was her most talented leading man? “Cary Grant,” she answers immediately. In 1964, she starred with Grant in the romcom Father Goose; Grant was 27 years her senior. “Cary was a complicated brain,” she says, pointing to her head. “He was a remarkable performer. He was very instinctive, seductive, intelligent. But when he got mad he would get into a terrible state. He worried about money.” Surely he had plenty of it? Yes, she says, but when you grow up poor you always think like a poor person. “I remember Charlie Chaplin saying to me: ‘If I were rich …’” When Chaplin died in 1977, he left more than $100m to his fourth wife, Oona.
Source: ‘I am very shy. It’s amazing I became a movie star’: Leslie Caron at 90 on love, art and addiction | The Guardian

Hemp captures more carbon than trees

I don’t think it will be long before we see fields and fields of hemp, just like we see fields of rapeseed at the moment. I already often wear hemp t-shirts, which need to be washed far less often than cotton ones.

Shah is working with the farm to develop new carbon-negative materials that could be used in manufacturing and construction.

"With Margent Farm’s hemp fibres, and using 100 per cent bio-based resins, we can produce bioplastics that can replace fibreglass composites, aluminium and other materials in a range of applications," he said. "We can use the wealth of textile science knowledge that humans have gathered over thousands of years to produce a range of textile fibre composites with properties suitable for non-structural products."

Shah added that the plant has the potential to help solve a wide variety of issues.

"Hemp is a terrific crop that enables us to tackle a multitude of human-generated environmental problems – air, soil and water for example – whilst being productive in offering us food, medicine and materials," he said.

Source: Hemp “more effective than trees” at carbon storage says researcher | Dezeen

Giving work oxygen

Cassie Robinson, whose work I seem to have been two steps removed from over the last decade, talks about the importance of weeknotes and working openly in general.

Her reasons for doing so?

  • It's about radiating intent
  • It’s about modelling better ways of working
  • It’s an important feedback loop
  • It’s about creating provenance for the work and for your integrity
  • It’s a double-sided coin
  • Using your positionality
Source: Visibility of the work and its possibilities | Cassie Robinson

Moving air through a building more efficiently using a fan

For those of you sweltering away inside a building, it might be better to be blowing air out of the window…

[embed]www.youtube.com/watch

This man reports that the best place to put a fan is about 2 ft from a window, facing the window, and he has numbers on a computer screen to prove it.
Source: The best place to put a fan | Boing Boing

Algorithmic work overlords

When I read articles like this that remind me of the film Elysium, I try and tell myself that, in the end, people won’t allow themselves to be treated like this.

But, on the other hand, there are always desperate people. Also, practices like this, if they become embedded in an industry, are hard to shift. This is why trade unions exist and are necessary to counter the power of huge organisations.

Flex hirings, performance reports, and firings are all handled by software, with minimal intervention by humans. Drivers sign up and upload required documents via a smartphone app, through which they also sign up for shifts, coordinate deliveries, and report problems. It’s also how drivers monitor their ratings, which fall into four broad buckets—Fantastic, Great, Fair, or At Risk. Flex drivers are assessed on a range of variables, including on-time performance, details like whether the package is sufficiently hidden from the street, and a driver’s ability to fulfill customer requests.
Source: Amazon is using algorithms with little human intervention to fire Flex workers | Ars Technica
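The four rating buckets described in the quote above lend themselves to a trivial threshold mapping. As a purely illustrative sketch (the threshold values here are invented; Amazon’s actual scoring model is not public), it might look something like this:

```python
# Hypothetical sketch of the four Flex rating buckets described above.
# The bucket names come from the article; the numeric thresholds are
# invented for illustration only.

def flex_bucket(score: float) -> str:
    """Map a 0-100 performance score to one of the four reported buckets."""
    if score >= 90:
        return "Fantastic"
    if score >= 75:
        return "Great"
    if score >= 50:
        return "Fair"
    return "At Risk"
```

The unsettling part, of course, is not the mapping itself but that consequences as serious as firing hang off a score computed with minimal human oversight.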

What exactly is 'hybrid work'?

‘Future’ is a new publication from the VC firm a16z. As such, most things there, while interesting, need to be taken with a large pinch of salt.

This article, for example, feels almost right, but as a gamer the ‘multiplayer’ analogy for work breaks down (for me at least) in several places. That being said, I’ve suggested for a while that our co-op meets around a campfire in Red Dead Redemption II instead of on Zoom…

Remote 1.0. The first wave of modern “remote-first” companies (including Automattic, Gitlab, and Zapier) leaned heavily on asynchronous communication via tools like Google Docs and Slack. This involved a fundamental culture shift that most enterprises could not — and didn’t want to — undertake. It didn’t help that video conferencing technology was clumsy and unreliable, making frictionless real-time communication unfeasible. When collaboration happened, it was primarily through screen sharing: low-fidelity, non-interactive, ineffective. Rather than paving the way, technology was in the way.

Remote 2.0, the phase we’re in, more closely approximates in-person work by relying on video conferencing that allows real-time collaboration (albeit still with friction); video calls are much better now, thanks to more consumer-friendly tools like Zoom and Google Meet. Millennials and Gen-Z-ers, who are more comfortable with multimedia (video and audio as well as multi-player gaming), are increasingly joining the workforce. But while this phase has been more functional from a technical standpoint, it has not been pleasant: “not being able to unplug” has become the top complaint among remote workers. (Especially since many teams have tried to replicate a sense of in-person presence by scheduling more video calls, leading to “Zoom fatigue”). As context diminishes, building trust has become harder — particularly for new employees.

Remote 3.0 is the phase ahead of us: hybrid work. The same challenges of Remote 2.0 are magnified here by asymmetry. The pandemic leveled the playing field at first by pushing everyone to remote work; now that it’s feasible to work in-person, though, hybrid work will create a “second-class citizen” problem. Remote employees may find it much harder to participate in core company functions, to be included in casual conversations, and to form relationships with their colleagues.

Source: Hybrid Anxiety and Hybrid Optimism: The Near Future of Work | Future

The most sustainable foods?

I’m surprised at this list from The Guardian, which includes red meat. As of February, I don’t eat fish (or shellfish) so mussels are off the list for me as well.

What is important, I think, is the bit at the bottom about waste food. I’ve started putting coffee grounds on the garden, and that banana skin curry sounds… interesting!

If, as a planet, we stopped wasting food altogether, we’d eliminate 8% of our total emissions – so one easy way to eat for the planet would be to tackle that, Steel points out. That could be through preserving and making stock from meat and fish bones – but it could also be as simple as eating as much of a fruit or vegetable as possible. “The skin, the seeds, the leaves – these are where the phytonutrients are,” she says, citing Nigella’s banana skin curry as an example. Supporting companies which are repurposing waste – surplus bread into beer, surplus fruit into condiments and chutneys – is another easy win.
Source: Eat this to save the world! The most sustainable foods – from seaweed to venison | The Guardian

Decentralised organising

I updated the WAO wiki page on how we make decisions today and used a graphic inspired by Richard D. Bartlett.

He, in turn, added the page to a 'handbook of handbooks' for decentralised organising.

a mega list of handbooks and toolkits

for groups working without top-down management

from social movements to workplaces

open source for anyone to read, update, share

Source: The Handbook of Handbooks for Decentralised Organising

AI for auto-generated landscapes

I’m still blown away by the canvas autofill in Photoshop, never mind AI turning blobs of virtual paint into landscapes! Incredible.

[embed]www.youtube.com/watch

Use AI to turn simple brushstrokes into realistic landscape images. Create backgrounds quickly, or speed up your concept exploration so you can spend more time visualizing ideas.
Source: NVIDIA Canvas: Turn Simple Brushstrokes into Realistic Images

95% of fish are 'dark fish'

If scientists have indeed got this correct, it’s an incredible finding.

Fish in the sea

Prof Duarte led a seven-month circumnavigation of the globe in the Spanish research vessel Hesperides, with a team of scientists collecting echo-soundings of mesopelagic fish.

He says most mesopelagic species tend to feed near the surface at night, and move to deeper layers in the daytime to avoid birds.

They have large eyes to see in the dim light, and also enhanced pressure-sensitivity.

"They are able to detect nets from at least five metres and avoid them," he says. "Because the fish are very skilled at avoiding nets, every previous attempt to quantify them in terms of biomass that fishing nets have delivered are very low estimates. So instead of different nets what we used were acoustics … sonar and echo sounders."

Source: Ninety-five per cent of world’s fish hide in mesopelagic zone | Phys.org

UK government survey into climate change and net zero

The UK government’s Department for Business, Energy & Industrial Strategy published a report today showing the results of an online survey into public perceptions of climate change and net zero.

Broadly speaking, ‘net zero’ is supported, but most people think we’ll achieve that through energy efficiency.

GOV.UK logo

Climate change was perceived to be affecting other countries more than respondents’ local area within the UK, although half of respondents (50%) felt that their local area had been affected to ‘at least some extent’.
  • Eighty-three percent of participants reported that climate change was a concern.
  • Fourteen percent of participants perceived climate change as affecting their local area by ‘a great deal’ compared to 42% of UK participants perceiving climate change as affecting other countries by ‘a great deal’.
  • Eighty-six percent of UK participants perceived other countries to be experiencing climate change effect to ‘at least some extent’.
  • Around half (54%) of participants perceived their local area to be experiencing climate change effect to ‘at least some extent’.
Source: Climate change and net zero: public awareness and perceptions | GOV.UK

Is self-censorship the most dangerous form of censorship?

Edward Snowden, in his new newsletter, makes the case that self-censorship — the suppression of ideas that never see the light of day — is the most dangerous kind.

Without mentioning it explicitly, I think he’s talking about cancel culture and deplatforming. He has a point, but the modern western world is very different from the Soviet examples which he gives.

(Bonus points for his mention of Michel De Montaigne’s best friend, Étienne de La Boétie, who died far too young.)

NIE CENZUROWANO: “This statement is not censored.”

Unlike in Kiš's milieu, or in contemporary North Korea, or Saudi Arabia, the coercive apparatus doesn't have to be the secret police knocking at the door. For fear of losing a job, or of losing an admission to school, or of losing the right to live in the country of your birth, or merely of social ostracism, many of today's best minds in so-called free, democratic states have stopped trying to say what they think and feel and have fallen silent. That, or they adopt the party-line of whatever party they would like to be invited to — whatever party their livelihoods depend on.

Such is the trickle-down effect of the institutional exploitation of the internet, of corporate algorithms that thrive on controversy and division: the degradation of the soul as a source of profit — and power.
Source: The Most Dangerous Censorship | Edward Snowden

New network of sleeper trains

Team Belshaw went inter-railing a few summers ago, which included a sleeper train from Switzerland to Slovenia, and it was fantastic.

In a time when we’ll certainly be looking to fly less, this is great news.

Map of proposed network

Midnight Trains is hoping post-Covid interest in cleaner, greener travel will generate interest in its proposed “hotels on rails”, which aims to connect the French capital to 12 other European destinations, including Edinburgh.

The founders say the aim is not to match the famous – and expensive – luxury of the Orient Express but offer an alternative to the basic, state-run SNCF sleepers and short-haul flights.

Key to the service will be “hotel-style” rooms offering privacy and security, and an onboard restaurant and bar.

Source: New network of European sleeper trains planned | Rail travel | The Guardian

Why going slowly speeds teams up

If I had to characterise the default way of doing things within average companies it would, unfortunately, include giving people more and more things to do until they can’t cope.

This is extremely inefficient, as this post explains a bit more scientifically than I could ever hope to do.

Graph showing wait time exponentially increasing

The most important, but actually probably the simplest to influence, is the utilization of the team. Just plan less work and give your team some slack. But simple does not mean easy. It's very counterintuitive. “What do you mean, plan less work? How is that going to speed things up?” Well, because science says so!

But if science and beautiful math formulas fail to convince, you can reach for an example everyone should be able to understand. The analogy is not perfect, but works pretty well. I am talking of course about traffic jams on the highway. I assume you have been in a few. Have you noticed how, when the number of cars on the highway starts to increase, the speed you are driving goes down a bit? And then it reaches some kind of seemingly illogical point where suddenly everything comes to a screeching halt, even when there is no apparent reason like an accident or closure?

Remember how the traffic experts keep telling you: “If there is a lot of traffic, slow down and avoid switching lanes to avoid causing a traffic jam?” Well, that’s because with a lot of traffic, the road has a high utilization (i.e. less space between cars). By switching lanes you are increasing the variability of arrival (each segment of each lane actually works as a separate queue). By going fast, you are unable to keep driving the same constant speed as everyone else and thus increase the variability of the duration of the task. You are constantly speeding up and slowing down. The task in this case means “moving one meter forward”. Under high utilization, even a slight increase in either of the two variabilities or the utilization itself has a huge effect on the queue size. The result: traffic jam.

Source: Ignore the King(man) at your own peril | Michal Táborský
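The queueing result the post leans on is Kingman's formula: expected wait grows roughly as utilisation/(1 − utilisation) times a variability term, so it explodes as a team approaches full capacity. A back-of-the-envelope sketch (the function and the numbers below are mine, not from the linked post):

```python
def kingman_wait(utilisation, cv_arrival, cv_service, mean_task_time):
    """Kingman's approximation for mean wait time in a G/G/1 queue.

    utilisation: fraction of capacity in use (0 < utilisation < 1)
    cv_arrival / cv_service: coefficients of variation of arrival
        gaps and of task durations (higher = more variable)
    """
    variability = (cv_arrival**2 + cv_service**2) / 2
    return (utilisation / (1 - utilisation)) * variability * mean_task_time

# With moderately variable work (both CVs = 1), wait time explodes
# as utilisation creeps towards 100%:
for u in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"{u:.0%} utilised -> wait approx {kingman_wait(u, 1.0, 1.0, 1.0):.0f}x task time")
```

Going from 90% to 99% utilisation roughly decuples the wait, which is why "just plan less work" speeds teams up: slack attacks the dominant term in the formula.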

How to stop being a perfectionist

This is a useful and to-the-point article about ways in which perfectionists self-sabotage, and the ways in which they can get out of their own way.

As a recovering perfectionist, I recognise these traits, and am still working on both ruminating about “weaknesses, mistakes, and failures” and applying my own high standards to others.

Perfectly-mown grass

The ways that the author notes that perfectionists can get in their own way are:

  • Struggling to make decisions or take action
  • Worrying excessively about sunk costs
  • Avoiding challenges to avoid failure
  • Applying their high standards to others
  • Ruminating about weaknesses, mistakes, and failures
...and the ways they can overcome these:
  • Learn from successes
  • Develop heuristics to enable faster decision-making and action taking
  • Ask yourself “How could I improve by 1%?”
  • Learn strategies to disrupt rumination
Source: How Perfectionists Can Get Out of Their Own Way | Harvard Business Review

There's a word for everything

I experienced some dysania this morning and made my own Brannock device using some paper yesterday to order my son some shoes online. You?

Definitions of words

Source: The name of things you probably didn’t know | Reddit

Lobsters and octopuses are sentient and feel pain

I stopped eating meat in November 2017 but, until February of this year, was still eating fish (including lobster and other shellfish).

That changed when, over dinner, our sporty 14-year-old son, who stopped eating meat just before the start of the pandemic, asked why he and I still ate fish if we didn't eat animals.

We stopped there and then. Once you've seen something like My Octopus Teacher, I don't know how I ever saw such creatures as food.

Octopus

The Animal Welfare (Sentience) Bill recognises animal sentience - which is the capacity of animals to have feelings, including pain and suffering.

It currently says fish, and other vertebrates which feel pain, should be protected as much as possible.

Animals like lobsters and octopuses are not currently protected by the bill because, as invertebrates, their bodies are different to ours, so they aren't thought to have those complex feelings, says a report by the Conservative Animal Welfare Foundation (CAWF).

The report says arguments against recognising these species focus on physical differences between these animals and humans - but this fails to understand what it means for an animal to have feelings.

It says those species "undoubtedly experience the world in extremely different ways to ourselves," but what matters is whether they feel pleasure and pain.

Source: MPs: Octopuses feel pain and need legal protection | BBC News

Leadership is contextual

This article feels quite foreign to me as a member of a co-operative, but it contains an important insight. I feel that there’s more nuance than the author provides, in that leadership is contextual.

Some people believe that they are a ‘leader’ because their job title says so. But true leadership comes when people choose to follow you, not be coerced into something because you’re higher up the pyramid than they are.

For as long as I can remember, leadership was the expectation. If you wanted to move up in the world, you had to be a leader: in school, at work, in your extracurriculars. Leadership was the golden ticket, and the more opportunities you took, the closer you’d get to owning the whole chocolate factory.
Source: What to do if you don't want to be a leader | Fast Company

How becoming a father changes men

It’s Father’s Day today, in the UK at least. My children, who both delight and infuriate in equal measure, spoiled me with some thoughtful presents.

This article touches on something I’ve observed in others and myself: becoming a father really does change men. As the diagram below shows, that happens in terms of testosterone, but in my experience being a dad changes your worldview.

Diagram showing testosterone levels reducing as children are born

New fathers show reduced testosterone, which may help them be more nurturing to their newborn children. Scientists sampled testosterone levels of more than 450 men in the Philippines in 2005 and again in 2009. All the men showed a slight decrease in testosterone levels (morning testosterone levels shown here), which is to be expected as they age. Men with newborn infants showed a much greater drop, however. Their testosterone returned to expected levels as their children grew up.
Source: Evolution of the dad | Knowable

Online personas and liquid modernity

blue black icon

Drew Austin references Zygmunt Bauman, an author I referenced in my thesis, in relation to personhood and social media. Really interesting.

Austin’s blog, which he seems to have abandoned in favour of a newsletter, discussed his friend recommending the creation of an ‘alt’ persona “in order to break free of some of the restrictions that an online persona imposes.” I find this interesting in light of my thinking about nuking everything and starting again.

(PS what are we calling Substack newsletters displayed on the internet these days? I think I’ll just call them web pages.)

In his 2000 book Liquid Modernity, Bauman wrote: “Seen from a distance, (other people’s) existence seems to possess a coherence and a unity which they cannot have, in reality, but which seems evident to the spectator. This, of course, is an optical illusion. The distance (that is, the paucity of our knowledge) blurs the details and effaces everything that fits ill into the Gestalt. Illusion or not, we tend to see other people’s lives as works of art. And having seen them this way, we struggle to (make our lives) the same.”

[…]

As Bauman presciently realized, the constraints of these digital environments and the sheer volume of users endows even the flimsiest online presences with an illusion of unity. Showing up frequently enough in the feed might elevate one’s presence to a work of art, at least from everyone else’s distracted perspective, and this in turn motivates us all to present our own selves more artfully. The speed of the information flow is essential to the entire illusion: A platform like Twitter makes our asynchronous posts feel like real-time interaction by delivering them in such rapid succession, and that illusion begets another more powerful one, that we’re all actually present within the feed.

[…]

Something I frequently joke about—a dark truth that begs for humor—is how social media requires continuous posting just to remind everyone else you exist. I once said that if Twitter was real life our bodies would always be slowly shrinking, and tweeting more would be the only way to make ourselves bigger again. We can always opt out of this arrangement, of course, and live happily in meatspace, but that is precisely the point: Offline we exist by default; online we have to post our way into selfhood. Reality, as Philip K. Dick said, is that which doesn’t go away when you stop believing in it, and while the digital and physical worlds may be converging as a hybridized domain of lived experience and outward perception, our own sustained presence as individuals is the quality that distinguishes the two.

Source: #162: Minimum Viable Self | Kneeling Bus

The ideology of e-s-c-a-p-e

Book cover: 'Deep Adaptation: Navigating the Realities of Climate Chaos' edited by Jem Bendell and Rupert Read

Taken from Jem Bendell's chapter ‘Deeper Implications of Societal Collapse: Co-liberation from the Ideology of E-s-c-a-p-e’ in the new book Deep Adaptation: Navigating the Realities of Climate Chaos, edited by Jem Bendell and Rupert Read.

The chapter is an auto-ethnographic one where Bendell examines his own assumptions and motivations for writing.

Entitlement involves thinking, 'I expect more of what I like and to be helped to feel fine.'

Surety involves thinking, 'I will define you and everything in my experience, so I feel calmer.'

Control involves thinking, 'I will try to impose on you and everything, including myself, so I feel safer.'

Autonomy involves thinking and feeling, 'I must be completely separate in my mind and being because otherwise I would not exist.'

Progress involves thinking and feeling, 'The future must contain a legacy from me, or make sense to me now, because if not, when I die, I would die even more.'

Exceptionalism means assuming, 'I am annoyed in this world because much about it upsets me and so I believe I'm better and/or needed.'

He continues:

To reject the ideology of e-s-c-a-p-e is to have little place in public discourse today. That is not by accident. The ideology of e-s-c-a-p-e has been conducive to the rise of certain power relations which are embedded in capitalism and all political systems. That ideology is reproduced and spreads through those economic and political systems. There is a relationship between material contexts and the deep rules or 'operating systems' of all societies and economies, on the one hand, and the ideologies that become widespread on the other. You may recall that Karl Marx once wrote about how the 'mode of production' of goods and services incentivizes certain ways of understanding oneself, the world and society (Cole 2007). It is clear that the 'mode of transaction and consumption' is as important as the mode of production for how we understand ourselves and the world. There is an iterative relationship between material contexts on the one hand and ideas about self and society on the other, especially when those ideas reshape what is considered (or is possible to experience as) a material resource.

Cultural complexes contributing to the climate crisis

Book cover of 'Deep Adaptation: Navigating the Realities of Climate Chaos', edited by Jem Bendell and Rupert Read

Taken from Adrian Tait's chapter 'Climate Psychology and Its Relevance to Deep Adaptation' in the new book Deep Adaptation: Navigating the Realities of Climate Chaos, edited by Jem Bendell and Rupert Read.

What I like about it is that it cuts to the root of much of what is wrong with western societies — the symptom of which is the climate crisis.

(i) the assumption that value is determined by monetary wealth and the monetization of everything;

(ii) the consumerist paradigm of well-being, in which desire for sex, status and fantasies of security are exploited. One example is the current boom in sport utility vehicle (SUV) sales, obliterating the emissions savings due to electrification of transport;

(iii) the 'no such thing as society' trope which defines us as isolates rather than members of a collective. The myth is one of liberation and motivation, but its main effect is to dehumanize;

(iv) the generalized belief that competition rather than cooperation is the natural condition for humanity and the main driver of progress. Competitive sport often (but not always) reinforces this;

(v) the 'culture of uncare', as outlined by Sally Weintrobe;

(vi) entitlement — the notion that we are not just special but at complete liberty to dominate, exploit and destroy. This myth has some religious underpinnings. It is also a close relative of colonialism. Entitlement includes expansion and incursion — a prime factor in zoonotic diseases like Covid-19 (Tait 2020);

(vii) species autonomy — the delusion that, with our brilliance, ingenuity, technology and built environment, we have created the world, a bubble in which we're above wider nature, rather than being dependent on the natural world in myriad ways.

Improv as a tool for building better products

I’m a fan of metaphor and productive ambiguity, and so I like this improv approach to product development.

Some improv scenes are initiated with a generic line and performers extract the game organically. e.g. "I can't believe it's midnight" is an intriguing start to a scene but there's no obvious game. In contrast, some improv scenes are initiated with strong game right away. e.g. initiating the scene with "No, you're an accountant, you can't just become a lion tamer". Both ways can lead to hilarious scenes.

Likewise, some products are initiated with a rough idea. This is in the camp of Eric Ries' model, where you’re lean, get feedback, and iterate quickly. The idea is to treat the path to product market fit as a series of experiments with hypotheses. In contrast, there is Keith Rabois' model, where you have a strong vision from day 0 and not much changes from then on. The idea is that you have a master plan from the start, and you get heads down on executing it. Check this post by Casey Winters comparing these models with far more nuance.

Source: Your product is a joke | The Paperclip

"This is extremely dangerous to our democracy"

Depending on what happens next year and in 2024, the US might not even be a democracy within this decade…

[embed]www.youtube.com/watch

Source: Multiple local news stations say the same thing verbatim | YouTube

Information means nothing by itself

I had reason to reference this image today, which is an update of the classic gapingvoid cartoon. The point I was making is that a lot of organisations think that they revolutionise learning by connecting people to knowledge.

However, as every educator should know, it’s the connections between bits of information, including context and application, which constitutes the learning experience. The thing that gets missed most often, of course, is the “so what?” — i.e. the impact.

PS- the above image is from the (seemingly) never-ending, information-knowledge meme, originally done as part of building a culture of innovation for our friends over at Genentech. They were happy, the idea lives on. This is how you turn change into movements 🙂
Source: Want to know how to turn change into a movement? | Gapingvoid

Value and liquidity of skills

This is a really nice way of explaining value within jobs and careers. Not only do you have to be good, but other people need to know about it.

It’s easy to make the mistake of conflating how much money you can make with how valuable your skill is. People think that being a doctor or a lawyer or an engineer is of fundamentally more value to society than being a chef or a musician, because they tend to make much more money. But the reality is that if one job makes more money than another, it’s generally not because that labor or skill is fundamentally more valuable, it’s just more liquid, more easily converted to money, or simply less replaceable.

Your ability to have a good career is the product of two things: the fundamental value and liquidity of the skills you have. So, when applied to job hunting, this means that there are really only two things that matter.

  • How good you are
  • How many people that influence hiring decisions know how good you are
All of the games people play to get an edge in hiring, like polishing resumes, practicing interviews, or going to networking events, are simply the popular ways of maximizing one of these two quantities. These small tactical pieces of advice can be useful, but I find it helpful to know what the ultimate goals are: to be good, and to have as many people know that as possible.
Source: Liquidity of skill | thesephist.com

Organic Maps

I really like Google Maps, but I don’t like how much data it hoovers up. I also don’t like how focused it is on urban areas, so this looks good…

Organic Maps is an Android & iOS offline maps app for travelers, tourists, hikers, and cyclists based on top of crowd-sourced OpenStreetMap data and curated with love by MAPS.ME founders.
Source: Organic Maps

The Puritan Class

Nigerian author Chimamanda Ngozi Adichie reflects on sanctimonious social media:

In certain young people today... I notice what I find increasingly troubling: a cold-blooded grasping, a hunger to take and take and take, but never give; a massive sense of entitlement; an inability to show gratitude; an ease with dishonesty and pretension and selfishness that is couched in the language of self-care; an expectation always to be helped and rewarded no matter whether deserving or not; language that is slick and sleek but with little emotional intelligence; an astonishing level of self-absorption; an unrealistic expectation of puritanism from others; an over-inflated sense of ability, or of talent where there is any at all; an inability to apologize, truly and fully, without justifications; a passionate performance of virtue that is well executed in the public space of Twitter but not in the intimate space of friendship.

I find it obscene.

There are many social-media-savvy people who are choking on sanctimony and lacking in compassion, who can fluidly pontificate on Twitter about kindness but are unable to actually show kindness. People whose social media lives are case studies in emotional aridity. People for whom friendship, and its expectations of loyalty and compassion and support, no longer matter. People who claim to love literature – the messy stories of our humanity – but are also monomaniacally obsessed with whatever is the prevailing ideological orthodoxy. People who demand that you denounce your friends for flimsy reasons in order to remain a member of the chosen puritan class.

Source: IT IS OBSCENE: A TRUE REFLECTION IN THREE PARTS | Chimamanda.com

Monetizing stupidity?

Nothing surprising about attractive person + financial advice getting people interested, but I thought this was interesting from the ‘monetizing stupid’ angle. Do you interact with the world as it is, or as you want it to be?

I focus pretty squarely on the latter, but there’s lots of money to be made from the former…

Everything in me wants to make fun of Altman here (and anyone who reads horoscopes for that matter). I want to say: “Hey, don’t you think it’s a little ridiculous to think that astrology (which is just another name for fake science) has any bearing whatsoever on imaginary digital tokens idolized by virgins!?”

But I won’t say that, because I think she might actually be some sort of accidental genius. Credit to me for showing self-control.

She’s taken 2 things that people go absolutely bat-shit crazy over (astrology & crypto) and smashed them together in bite-sized clips made so that even an ADHD-riddled-crypto-obsessed chimpanzee can digest them.

Source: Monetizing stupid | Contemporary Idiot

Open Badges as Verifiable Credentials

I’m really grateful for people like Kerri Lemoie who understand digital credentials both technically and educationally, and have the time (she now works at Badgr) to steer this in the right direction.

Verifiable Credentials put learners in the center of a trust triangle with issuers and verifiers. They also add an additional layer of verification for the recipients. Open Badges can take advantage of this, be the first education-focused digital credential spec to promote personal protection of and access to data, and be part of the growing ecosystem that is exchanging Verifiable Credentials.
Source: Open Badges as Verifiable Credentials | Kerri Lemoie
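To make the "trust triangle" concrete, here's a minimal sketch of a credential's shape. The top-level field names follow the W3C Verifiable Credentials data model; the issuer URL and the achievement claim are illustrative placeholders I've made up, not anything from the Open Badges spec:

```python
# A minimal Verifiable Credential sketched as a Python dict. Top-level
# fields follow the W3C VC data model; the issuer and the claim inside
# credentialSubject are made-up placeholders.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential"],
    "issuer": "https://example.org/issuer",      # who makes the claim
    "issuanceDate": "2021-06-20T00:00:00Z",
    "credentialSubject": {                       # the learner at the centre
        "id": "did:example:learner-123",
        "achievement": "Example Open Badge",
    },
    # A real credential would also carry a "proof" block that any
    # verifier can check cryptographically against the issuer's keys,
    # without having to contact the issuer at all.
}
```

That last point is the extra layer of verification Lemoie mentions: the issuer signs, the learner holds, and verifiers check independently.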

Criminals' right to be forgotten

This is interesting: the Associated Press are no longer going to name people involved in minor crimes. I have to agree with their rationale.

These minor stories, which only cover an arrest, have long lives on the internet. AP’s broad distribution network can make it difficult for the suspects named in such items to later gain employment or just move on in their lives.

Broadly speaking, when evaluating such stories, we should consider first whether the story is worthy of our news report, and if distributing it is indeed useful to our members and customers. If the answer is yes, in keeping with AP’s commitment to fairness, we now will no longer name suspects in brief stories about minor crimes in which there is little chance AP will provide coverage beyond the initial arrest.

Source: AP Definitive Source | Why we’re no longer naming suspects in minor crime stories

The end of cookie banners?

This is the draft of a new standard (spec) to hopefully get rid of those annoying cookie banners. We went through all of this with Do Not Track, so let’s see if this approach ends up working… 🤞

ADPC is a proposed automated mechanism for the communication of users’ privacy decisions. It aims to empower users to protect their online choices in a human-centric, easy and enforceable manner. ADPC also supports online publishers and service providers to comply with data protection and consumer protection regulations.
Source: ADPC: A Human-centric and Enforceable Privacy Specification

Positive deviance in the workplace

This article is based around a story about NASA engineers in the 1980s, but touches on something that I feel that we know instinctively. While every company will say they welcome risk-takers and rulebreakers, the reality is very different.

It’s one of the reasons I work with my co-op colleagues in solidarity. We can do what others cannot.

There is psychological evidence that rebelliousness is essential for creativity. Harvard psychiatrist Albert Rothenberg spent more than five decades researching individuals who had made ground-breaking contributions to science, literature and the arts, seeking to understand what drove their creativity. As part of a broader research project that encompassed structured interviews, experimental studies and documentary analysis, Rothenberg interviewed 22 Nobel Laureates. He found that they were strongly emotionally driven by wanting to create something new, rather than extend current perspectives. He found they consciously saw things with a fresh mindset rather than blindly following established wisdom – two qualities that would seem to suggest a rebellious, rather than conformist, personality.
Source: 'Positive deviants': Why rebellious workers spark great ideas | BBC Worklife

Slow travel and camping in other people's gardens

A lazy way to describe this would be ‘Airbnb for camping’ but actually, it’s green, anti-capitalist and community-oriented. I might list my garden (as there aren’t many spots listed in the UK right now).

Welcome To My Garden is a not-for-profit network of citizens offering free camping spots in their gardens to slow travellers.
Source: Welcome To My Garden

Generative art

We’re going to see a lot more of this in the next few years, along with the predictable hand-wringing about what constitutes ‘art’.

Me? I love it and would happily hang it on my wall — or, more appropriately, show it on my Smart TV.

Fidenza is my most versatile generative algorithm to date. Although it is not overly complex, the core structures of the algorithm are highly flexible, allowing for enough variety to produce continuously surprising results. I consider this to be one of the most interesting ways to evaluate the quality of a generative algorithm, and certainly one that is unique to the medium. Striking the right balance of unpredictability and quality is a difficult challenge for even the best artists in this field. This is why I’m so excited that Fidenza is being showcased on Art Blocks, the only site in existence that perfectly suits generative art and raises the bar for developing these kinds of high-quality generative art algorithms.
Source: Fidenza — Tyler Hobbs

Social media is done

Ironically enough, I discovered the author Rick Wayne via his posts on the Fediverse. He's decided that he's done with social media, and has a new newsletter on Substack.

His old newsletter, which I signed up for only recently, doesn't have a public-facing version I can link to. I did, however, want to share a quotation from it in which Wayne announces his new project:

Social media is done. That’s not to say it will die, but it’s not what it was just five years ago. It used to feel like we were really making friends. People would HIRL and travel to meet each other. Now, it feels like one big church potluck. We trade polite nothings with the fellows in our sect because those other people are dangerous, and let’s face it: empty promises are better than no promises at all.

Well put.

Conceptual integrity

As a project manager, as a product manager, and as a consultant, the thing that often frustrates me is the desire to go full steam ahead without a shared understanding of what it actually is that we're supposed to be doing.

Dorian Taylor, in a wider-ranging piece about Agile, talks about this as conceptual integrity:

The one idea from the 1970s most conspicuously absent from Agile discourse is conceptual integrity. This—another contribution from Brooks—is roughly the state of having a unified mental model of both the project and the user, shared among all members of the team. Conceptual integrity makes the product both easier to develop and easier to use, because this integrity is communicated to both the development team and the user, through the product.

Without conceptual integrity, Brooks said, there will be as many mental models as there are people on the team. This state of affairs requires somebody to have the final say on strategic decisions. It furthermore requires this person to have diverse enough expertise to mentally circumscribe—and thus have a vision for—the entire project in every way that was important, even if not precisely down to the last line of code.

Source: Agile as Trauma | dorian taylor

Dunbar's friendship circles

This is interesting: the number of people we say we have in our different friendship ‘circles’. Extroverts tend to have more than introverts.

Those numbers are aspirational, right? 😅

Dunbar’s number really isn’t a single number. It should be a series of numbers. When collecting data on personal friendships, we asked everybody to list out everybody in their friendship circles, when they last saw them, and how emotionally close they felt to them on a simple numerical scale. Relationships turned out to be highly structured in the sense that people didn’t see or contact everybody in their social network equally. The network was very clumpy.

The distribution of the data formed a series of layers, with each outer layer including everybody in the inner layer. Each layer is three times the size of the layer directly preceding it: 5; 15; 50; 150; 500; 1,500; 5,000.

The innermost layer of 1.5 is [the most intimate]; clearly that has to do with your romantic relationships. The next layer of five is your shoulders-to-cry-on friendships. They are the ones who will drop everything to support us when our world falls apart. The 15 layer includes the previous five, and your core social partners. They are our main social companions, so they provide the context for having fun times. They also provide the main circle for exchange of child care. We trust them enough to leave our children with them. The next layer up, at 50, is your big-weekend-barbecue people. And the 150 layer is your weddings and funerals group who would come to your once-in-a-lifetime event.

The layers come about primarily because the time we have for social interaction is not infinite. You have to decide how to invest that time, bearing in mind that the strength of relationships is directly correlated with how much time and effort we give them.

Source: Robin Dunbar Explains Humans' Circles of Friendship | The Atlantic

Remote workers clock up more hours, says one study

It takes time and/or training to transition fully to remote working. If it’s not something you’ve chosen (say, because of the pandemic) then that’s doubly-problematic.

I really enjoy working remotely. I miss travelling for events and meetups, which I used to do probably 10-15 times per year, but the actual working from home part is great. As I type this I’m in my running stuff waiting for the Tesco delivery. Work happens around life, rather than the other way round.

This article talks about one study, which I don’t think is illustrative of the wider picture. What I do recognise, however, is the temptation to work more hours when you live in your workplace. You have to be strict.

Ultimately, it comes down to control. If you’re in control of your time, then eventually you spend it productively. For example, I work fewer than 30 hours per week in an average week, mainly because I don’t attend meetings I don’t have to.

Early surveys of employees and employers found that remote work did not reduce productivity. But a new study* of more than 10,000 employees at an Asian technology company between April 2019 and August 2020 paints a different picture. The firm uses software installed on employees’ computers that tracked which applications or websites were active, and whether the employee was using the keyboard or a mouse. (Shopping online didn’t count.)

The research certainly concluded that the employees were working hard. Total hours worked were 30% higher than before the pandemic, including an 18% increase in working outside normal hours. But this extra effort did not translate into any rise in output. This may explain the earlier survey evidence; both employers and employees felt they were producing as much as before. But the correct way to measure productivity is output per working hour. With all that extra time on the job, this fell by 20%.
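
As a back-of-the-envelope check (my simplifying assumption, not the study’s: output stayed exactly flat while hours rose by 30%), the output-per-hour arithmetic works out like this:

```python
# Sketch of the output-per-hour arithmetic, assuming (purely for
# illustration) that output stayed flat while total hours rose by 30%.
baseline_hours = 40.0        # notional pre-pandemic working week
baseline_output = 100.0      # arbitrary units of output

new_hours = baseline_hours * 1.30   # 30% more hours on the job
new_output = baseline_output        # same output as before (assumption)

drop = 1 - (new_output / new_hours) / (baseline_output / baseline_hours)
print(f"Output per hour fell by {drop:.0%}")
```

Under that flat-output assumption the fall is about 23%, close to the roughly 20% the study reports; the small gap suggests measured output may also have shifted slightly.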

Source: Remote workers work longer, not more efficiently | The Economist

Fractional dosing of COVID vaccines may help more people get immunity faster

The advice to date has, quite rightly, been to get any COVID vaccine that’s available to you. For me, that’s meant a double dose of AstraZeneca, and I’m happy about that.

But as the pandemic progresses, we need to be aware that some vaccines are more effective than others. This working paper, building on one published in Nature earlier this year, looks at how ‘fractional dosing’ of the Moderna and Pfizer vaccines could reach more people more quickly.

Needless to say, we shouldn’t be in the position where people in less developed countries are getting access to vaccines much more slowly than the rest of the world. But, pragmatically speaking, this may help.

We supplement the key figure from Khoury et al.’s paper to show that fractional doses of the Moderna and Pfizer vaccines have neutralizing antibody levels (as measured in the early phase I and phase II trials) that look to be on par with those of many approved vaccines. Indeed, a one-half or one-quarter dose of the Moderna or Pfizer vaccine is predicted to be more effective than the standard dose of some of the other vaccines like the AstraZeneca, J&J or Sinopharm vaccines, assuming the same relationship as in Khoury et al. holds. The point is not that these other vaccines aren’t good–they are great! The point is that by using fractional dosing we could rapidly and safely expand the number of effective doses of the Moderna and Pfizer vaccines.

[…]

One more point worth mentioning. Dose stretching policies everywhere are especially beneficial for less-developed countries, many of which are at the back of the vaccine queue. If dose-stretching cuts the time to be vaccinated in half, for example, then that may mean cutting the time to be vaccinated from two months to one month in a developed country but cutting it from two years to one year in a country that is currently at the back of the queue.
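
The dose-stretching arithmetic in that example can be sketched in a couple of lines (the queue times are the article’s illustrative figures; the factor of two assumes half-doses are used throughout):

```python
# Illustrative dose-stretching arithmetic from the article's example:
# stretching doses by a factor of two halves the time to vaccinate.
def months_to_vaccinate(baseline_months: float, stretch_factor: float) -> float:
    """Months needed to vaccinate a population, given a dose-stretching factor."""
    return baseline_months / stretch_factor

print(months_to_vaccinate(2, 2))    # developed country: 2 months -> 1 month
print(months_to_vaccinate(24, 2))   # back of the queue: 2 years -> 1 year
```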

Source: A Half Dose of Moderna is More Effective Than a Full Dose of AstraZeneca | Marginal REVOLUTION

People pay selective attention to what they deem important

I really enjoyed this article, ostensibly about the amazing vocal technique of one Charles Kellogg who could “put out fire by singing”. Apparently he could also imitate birdsong perfectly. There’s an interesting video at the end of the article about that.

More interesting to me, however, is the anecdote about what Kellogg’s ear was attuned to, even in a busy urban environment.

Perhaps the most revealing anecdote tells of him walking down the street during a visit to New York, when Kellogg stopped short at the intersection of Broadway and West 34th Street. He turned to his companion and said: “Listen, I hear a cricket.” His friend responded: “Impossible—with all this racket you couldn’t hear a tiny sound like that.” And it was true: cars, trolleys, passersby, shouting newspaper vendors created such a hustle and bustle that no cricket could possibly be discerned in the hubbub.

But, true to his word, Kellogg scrutinized their busy surroundings, and a moment later crossed the street with his companion following along—and there on a window ledge pointed to a tiny cricket. “What astonishing hearing you have,” his friend marveled. But instead of responding, Kellogg reached into his pocket and pulled out a dime, which he dropped on the sidewalk. The moment the coin hit the pavement it made a small pinging noise, and everybody within 50 feet of the sound stopped and started looking for the coin. People listen for what’s most important for them, he later explained: for New Yorkers it’s the sound of money, for Charles Kellogg it was the chirping of a cricket.

Source: The Man Who Put Out Fires with Music | Culture Notes of an Honest Broker

No more simplified URLs in Chrome

On balance, I’m pleased that this ‘experiment’ is being put to rest. Although I’m for simplifying needlessly-complex aspects of the web, my previous work on web literacy suggests there’s a certain amount of knowledge and understanding people need in order to read, write, and participate on the web effectively.

Example of domain showing instead of full URL

At the time, Google said that the reason for running the experiment was that showing full URLs makes it harder for non-technical users to distinguish between legitimate and malicious (phishing) sites, many of which use complicated and long URLs in attempts to confuse users.

Showing only the domain name was considered a good way to remove the extra chaff from a complex URL and only leave the core domain visible in the URL bar.

If users wanted to view the full link, they could click or hover the Chrome address bar to reveal the rest of the page URL.

However, despite its good intentions, the experiment never sat well, with both security experts and end-users alike, who often complained about it when Google silently enabled it on some browsers to gather usage statistics.
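
To make the distinction concrete, here’s a minimal sketch (the URL is my own hypothetical example, not one from the article) of the difference between a full URL and the bare hostname the experiment displayed:

```python
from urllib.parse import urlparse

# A long, confusing URL of the sort phishers rely on.
# (Hypothetical example domains; any URL parses the same way.)
url = "https://accounts.example.com/login/session?redirect=secure-bank.com"

parsed = urlparse(url)
print(parsed.netloc)  # hostname only, roughly what the experiment showed
print(url)            # the full URL, as Chrome displays it today
```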

Source: Google abandons experiment to show simplified domain URLs in Chrome | The Record

A point-based system for email address pronounceability 

My personal email address scores 1, my co-op email address (because it ends in .coop) gets a 2.

You're at the doctor's office, talking to an acquaintance, or ordering something on the phone and they ask the question: What's your email? Depending on your name, age, and your life choices this can be a breeze or the dreaded question. How long does it take before you have to break out the phonetic alphabet? How many times do you have to repeat it?

Today we’re going to come up with a scoring system to measure how painful your email is to tell someone. It’s a golf scoring system with low scores being easy and each point is one unit of struggle for both you and the recipient.
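
The article defines its own point system, but a toy version of such a ‘struggle score’ might look like this (every penalty rule below is my invention, purely for illustration):

```python
# Toy "how hard is your email to say?" scorer. The penalty rules here
# are invented for illustration; the linked article defines its own.
def email_struggle_score(email: str) -> int:
    local, _, domain = email.partition("@")
    score = 0
    if any(ch.isdigit() for ch in local):
        score += 1   # digits invite "was that the number one, or 'won'?"
    if any(ch in local for ch in ".-_"):
        score += 1   # separators have to be spelled out
    if not domain.endswith((".com", ".org", ".net")):
        score += 1   # unusual TLDs (.coop, .io, ...) need explaining
    if len(local) > 10:
        score += 1   # long names mean more repeating
    return score

print(email_struggle_score("doug@example.com"))          # 0
print(email_struggle_score("d.belshaw42@example.coop"))  # 4
```

Under these made-up rules an address on a .coop domain picks up a point for its unusual TLD alone, much as it does under the article’s real scoring.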

Source: How Hard is your Email to Say?

The end of petrol stations

Another article looking at the future of electric vehicles. I particularly like the section where it talks about how, if you were trying to sell the idea of petrol stations these days, you'd never get anyone to sign off the health and safety side of things.

Electric vehicle optimists paint a world where you can plug in anywhere you park - at home while you sleep, as you work, when you are shopping or at the cinema.

Pretty much whatever you are doing, energy will be flowing into your car.

At this point, says Erik Fairbairn, 97% of electric car charging will happen away from petrol pump equivalents.

"Imagine someone came around and filled up your car with petrol every night so you had 300 miles of range every morning," he says. "How often would you need anything else?"

In this brave new world, you'll only ever pull over into a service station on really epic, long journeys when you'll top up your battery for 20-30 minutes while you have a coffee and use the facilities.

Source: Why it's the end of the road for petrol stations | BBC News

A glimpse into the future of autonomous electric vehicles

Ideally, we’d all be using mass transit rather than just switching fossil fuel-based vehicles for their electric equivalents. But, as a student of human nature, I recognise that autonomous electric vehicles might be a pragmatic stop-gap.

This is an interesting article, as it puts a price on how much these vehicles might cost by the hour (~7 Euros) and talks about what people might be doing while waiting for them to recharge (playing video games!)

The German automaker is considering charging an hourly fee for access to autonomous driving features once those features are ready. The company is also exploring a range of subscription features for its electric vehicles, including “range or performance” increases that can be purchased on an hourly or daily basis, said Thomas Ulbrich, a Volkswagen board member, to the German newspaper Die Welt. Ulbrich said the first subscription features will appear in the second quarter of 2022 in vehicles based on Volkswagen’s MEB platform, which underpins the company's new ID.3 compact car and ID.4 crossover.

Source: What would you pay for autonomous driving? Volkswagen hopes $8.50 per hour | Ars Technica

Health and sanity before profit

This is an interesting article that takes the tennis player Naomi Osaka’s withdrawal from the French Open as a symptom of wider trends in the workplace.

Many Americans have experienced burnout, and its adjacent phenomenon, languishing, during the pandemic. Unsurprisingly, it has hit women, especially mothers, particularly hard and women’s professional ambition has suffered, according to a survey by CNBC/SurveyMonkey. This trend might be read as a grim step backward in the march toward gender egalitarianism. Or, as in some of the criticism of Ms. Osaka, as an indictment of younger generations’ work ethic. Either interpretation would be misguided.

A better way of putting it: Ms. Osaka has given a public face to a growing, and long overdue, revolt. Like so many other women, the tennis prodigy has recognized that she has the right to put her health and sanity above the unending demands imposed by those who stand to profit from her labors. In doing so, Ms. Osaka exposes a foundational lie in how high-achieving women are taught to view their careers.

In a society that prizes individual achievement above most other things, ambition is often framed as an unambiguous virtue, akin to hard work or tenacity. But the pursuit of power and influence is, to some extent, a vote of confidence in the profit-driven myth of meritocracy that has betrayed millions of American women through the course of the pandemic and before it, to our disillusionment and despair.

Source: Naomi Osaka and the Cost of Ambition | The New York Times

Rationalising work for the 40+ brigade

Buried towards the bottom of an update about the Breaking Smart newsletter, Venkatesh Rao includes this diagram and links to a post where he commits to longer-term work.

In an associated Twitter thread, one tweet talks about one way of telling whether a project is a new TLP (‘Top Level Project’) or part of an existing one: does it require a new name or domain name? Interesting.

I’m trying to rationalize all my activities to be simpler and easier to manage. An important first step for me was shutting down my other newsletter, Art of Gig, a month ago. Another was adopting 2 long-term rules across my projects — no new top-level projects, and minimum 10-year commitments.

Source: Upcoming Changes | Breaking Smart

Anti-social media

As I mentioned on my blog recently, I sometimes feel a strong pull to ‘nuke’ everything and start over again. With Twitter, I actually did this back in 2017, deleting 77.5k tweets spanning 10 years. They now auto-delete every three months.

This article is based on a survey that BuzzFeed News carried out which revealed a shift in attitude, especially among younger people, to social media. (I think we need different names for social media that any member of the public can see versus those that are private to your followers by default.)

Trying to live in the moment isn’t just difficult because so many of us are prone to documenting our days; our phones and social media apps are also intent on continually resurfacing aspects of our past. While some respondents said they were happy to have the reminders (one mentioned loving comments popping up from her late grandmother — “she was hilarious!”), others had more mixed or flat-out negative feelings.

Seeing versions of ourselves from 5 or 10 years ago can be cringey, which is why a lot of respondents have purged old posts altogether. Ashlee Burke from Boston, who’s in her late 20s, said she made her old Facebook photo albums private because they’re embarrassing, not because they showed any illegal activity or anything — “unless it’s illegal to be the most embarrassing teenager on the face of the Earth.”

Source: COVID Made People Delete Facebook And Instagram | BuzzFeed News

AI-generated misinformation is getting more believable, even by experts

I’ve been using thispersondoesnotexist.com for projects recently and, honestly, I wouldn’t be able to tell that most of the faces it generates every time you hit refresh aren’t real people.

For every positive use of this kind of technology, there are of course negatives. Misinformation and disinformation is everywhere. This example shows how even experts in critical fields such as cybersecurity, public safety, and medicine can be fooled, too.

If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation—flagged and unflagged—has been aimed at the general public. Imagine the possibility of misinformation—information that is false or misleading—in scientific and technical fields like cybersecurity, public safety, and medicine. There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community. We found that it’s possible for artificial intelligence systems to generate false information in critical fields like medicine and defense that is convincing enough to fool experts.

Source: False, AI-generated cybersecurity news was able to fool experts | Fast Company

The end of the Millennial Lifestyle Subsidy

What goes up must come down. In this case with prices of services backed by VC firms, the reverse is true…

For years, these subsidies allowed us to live Balenciaga lifestyles on Banana Republic budgets. Collectively, we took millions of cheap Uber and Lyft rides, shuttling ourselves around like bourgeois royalty while splitting the bill with those companies’ investors. We plunged MoviePass into bankruptcy by taking advantage of its $9.95-a-month, all-you-can-watch movie ticket deal, and took so many subsidized spin classes that ClassPass was forced to cancel its $99-a-month unlimited plan. We filled graveyards with the carcasses of food delivery start-ups — Maple, Sprig, SpoonRocket, Munchery — just by accepting their offers of underpriced gourmet meals.

These companies’ investors didn’t set out to bankroll our decadence. They were just trying to get traction for their start-ups, all of which needed to attract customers quickly to establish a dominant market position, elbow out competitors and justify their soaring valuations. So they flooded these companies with cash, which often got passed on to users in the form of artificially low prices and generous incentives.

Now, users are noticing that for the first time — whether because of disappearing subsidies or merely an end-of-pandemic demand surge — their luxury habits actually carry luxury price tags.

Source: Farewell, Millennial Lifestyle Subsidy | The New York Times

Briar now does pictures

Briar isn’t the kind of app you necessarily use every day and, in fact, it positions itself as something used by activists. That being said, it’s really useful that there’s now the ability to send images to other users.

I’ve tested the feature (which requires both parties to be on v1.3) and it works well.

The Briar Project released version 1.3 of its Android app today. Thanks to support from eQualit.ie, this release adds several new features that have been requested by many users over the years. With today’s release, users can upload profile pictures that will be visible only to their contacts. Lots of people have asked for a way to send images via Briar. We listened! This release adds the ability to send images in private conversations. Images are still heavily compressed, so high resolution images might show pixel artifacts.

Source: Briar 1.3 released

Who's the pet? Tarantula or tiny frog?

I read recently that some tarantulas keep tiny frogs as ‘pets’. Of course, I had to do some more digging and found out that’s not quite true, and if it were, it would be more like the other way around (as some tarantulas are so docile!)

I’ve seen some “sources” (and I really do use the word source here in its least possible capacity) try to say that the frogs eat potential nuisances to the spider like ants, mites and other nasties which is why the spider keeps it around. This is silly for a few reasons. 1. Tarantulas line the entire length of their burrow (which can be several feet deep) with thick sticky web. This prevents things like small insects from burrowing into or walking into their homes. Anything large enough to do so is too big to be eaten by the frog. 2. These frogs don’t even eat stuff like mites or springtails. The prey items are far too small for them to bother with. 3. If there are enough ants to bother a tarantula of this size the frog is going to die if it sticks around anyway.

Source: Is it true that some tarantulas keep tiny frogs as pets? - Quora

More US electoral chaos to come in 2024?

Difficult to argue against this scenario.

The scenario then goes like this. The Republicans win back the House and Senate in 2022, in part thanks to voter suppression. The Republican candidate in 2024 loses the popular vote by several million and the electoral vote by the margin of a few states. State legislatures, claiming fraud, alter the electoral count vote. The House and Senate accept that altered count. The losing candidate becomes the president. We no longer have “democratically elected government.” And people are angry.

Source: The Last Free Election in America | Kottke

Epistemological chaos and denialism

Good stuff from Cory Doctorow on how Big Tobacco invented the playbook and it’s been refined in other industries (especially online) ever since.

Denial thrives on epistemological chaos: a denialist doesn’t want to convince you that smoking is safe, they just want to convince that it’s impossible to say whether smoking is safe or not. Denial weaponizes ideas like “balance,” demanding that “both sides” of every issue be presented so the public can decide. Don’t get me wrong, I’m as big a believer in dialetical materialism as you are likely to find, but I also know that keeping an open mind doesn’t require that you open so wide that your brains fall out.

The bad-faith “balance” game is used by fraudsters and crooks to sow doubt. It’s how homeopaths, anti-vaxers, eugenicists, raw milk pushers and other members of the Paltrow-Industrial Complex played the BBC and other sober-sided media outlets, demanding that they be given airtime to rebut scientists’ careful, empirical claims with junk they made up on the spot.

This is not a harmless pastime. The pandemic revealed the high price of epistemological chaos, of replacing informed debate with cynical doubt. Argue with an anti-vaxer and you’ll soon realize that you don’t merely disagree on what’s true — you disagree on whether there is such a thing as truth, and, if there is, how it can be known.

Source: I quit. | Cory Doctorow | Medium

Information cannot be transmitted faster than the [vacuum] speed of light

It’s been a while since I studied Physics, so I confess to not exactly understanding what’s going on here. However, if it speeds up my internet connection at some point in the future, it’s all good.

"Our experiment shows that the generally held misconception that nothing can move faster than the speed of light, is wrong. Einstein's Theory of Relativity still stands, however, because it is still correct to say that information cannot be transmitted faster than the vacuum speed of light," said Dr. Lijun Wang. "We will continue to study the nature of light and hopefully it will provide us with a better insight about the natural world and further stimulate new thinking towards peaceful applications that will benefit all humanity."

Source: Laser pulse travels 300 times faster than light

You don't have to monetize your joy

A useful reminder.

Adam J. Kurtz, author of Things Are What You Make of Them has rewritten the maxim for modern creatives: “Do what you love and you’ll never work a day in your life work super fucking hard all the time with no separation or any boundaries and also take everything extremely personally.” Which, aside from being relatable to anyone who has tried to make money from something they truly care about, speaks to an underrepresented truth: those with passion careers can have just as much career anxiety as those who clock in and out of the mindless daily grind.

[…]

How did we get to the point where free time is so full of things we have to do that there’s no room for things we get to do? When did a beautiful handmade dress become a reminder of one’s inadequacies? Would the world really fall apart if, when I came home from a long day of work, instead of trying to figure out what I could conquer, I sat down and, I don’t know, tried my hand at watercolors? What if I sucked? What if it didn’t matter? What if that’s not the point?

Source: The Modern Trap of Feeling Obligated to Turn Hobbies Into Hustles | repeller

Peer review sucks

I don’t have much experience of peer review (I’ve only ever submitted one article and peer reviewed two) but it felt a bit archaic at the time. From what I hear from others, they feel the same.

The interesting thing from my perspective is that the whole edifice of the university system is slowly crumbling. Academics know that the system is ridiculous.

This then is why I was so bothered about how Covid-19 research is reported: peer review is no guard, is no gold standard, has little role beyond gate-keeping. It is noisy, biased, fickle. So pointing out that some piece of research has not been peer reviewed is meaningless: peer review has played no role in deciding what research was meaningful in the deep history of science; and played little role in deciding what research was meaningful in the ongoing story of Covid-19. The mere fact that news stories were written about the research decided it was meaningful: because it needed to be done. Viral genomes needed sequencing; vaccines needed developing; epidemiological models needed simulating. The reporting of Covid-19 research has shown us just how badly peer review needs peer reviewing. But, hey, you’ll have to take my word for it because, sorry, this essay is (not yet peer reviewed).

Source: The Absurdity of Peer Review | Elemental

How to recover from burnout

The World Health Organisation (WHO) defines occupational burnout as "feelings of energy depletion or exhaustion; increased mental distance from one’s job, or feelings of negativism or cynicism related to one's job; and reduced professional efficacy."

Based on that definition, I've experienced burnout twice, once in my twenties and once in my thirties. But what to do about it? And how can we prevent it?

I read a lot of Hacker News, including some of the 'Ask HN' threads. This one soliciting advice about burnout received what I considered to be a great response from one user.

Around August last year I just couldn't continue. I wasn't sleeping, I was frequently run down, and I was self-medicating more and more with drugs and alcohol. It eventually got to the point where simply opening my laptop would elicit a fight or flight response.

I was lucky enough to be in a secure enough financial situation to largely take 6 months off. If you're in a position to do this, I highly recommend it.

I uninstalled gmail, slack, etc. from my phone. I considered getting a dumb phone, but settled for turning off push notifications for everything instead. I went away with my girlfriend for a week and left all my tech at home except for my kindle (literally the first time I've been disconnected for more than a couple of days in probably 20 years). I exercised as much as possible and spent time in nature going for walks, etc.

I've been back at it part time for the last few months. Gradually I felt the feelings of burnout being replaced with feelings of boredom, which is hopefully my brain's way of saying that it's starting to repair itself and ready to slowly return to work.

I'm still nowhere near back to peak productivity, but I'm starting to come to terms with the fact that I may never get back there. I'm 36 and probably would have dropped dead of overwork by 50 if I kept up the tempo of the last 10 years anyway.

I'm not 'cured' by any means, but I believe things are slowly getting better.

My advice to you is to be kind and patient with yourself. Try not to stress about not having a side-project, and instead just focus on self-care for a while. Someone posted this on HN a few weeks back and it really hit close to home for me: http://www.robinhobb.com/blog/posts/38429

Source: Ask HN: Post Burnout Ideas | Hacker News

Portals to another world (or town)

I love this idea. I can think of many ways it could go wrong, but that’s not the point. There’s also lots of ways it could be awesome.

Vilnius, Lithuania, has installed a “portal” that allows residents to make contact in real time with the inhabitants of Lublin, Poland. Each city hosts a large circular screen and cameras by which residents can interact in real time via the Internet.

Source: Neighbors - Futility Closet

How to organise your fridge

My wife, who is one of the most organised people I know, is nevertheless what I would term a ‘fridge anarchist’. I like order, she puts anything anywhere. Lifehacker agrees with my way of doing things.

Store snacks, leftovers, and other items that get consumed quickly (that could also go bad quickly) on the top shelf. The middle shelves are for dairy, cheeses, cooked meats, and leftovers. The midsection tends to be on the cooler end, so store your milk and eggs here, and they’ll keep longer. If your milk doesn’t fit in the middle section, you can easily rearrange the shelving to accommodate your needs. Items that contain bacteria need to be kept separate to avoid cross-contamination—store these items on the last shelf. The bottom shelf is perfect for raw meat and fish, and should be wrapped or stored in sealed containers. The drawers are for your fruits and vegetables. (Though they can be too moist for mushrooms.)

Source: Organize Your Fridge Like You're a Goddamned Adult | Lifehacker

A cure for depression and boredom

I love this response to a letter about feeling bored and depressed. The answer is basically “welcome to the world” and that they’re never going to be happier by getting a better job or a bigger apartment.

That these are sad times and it feels bad to live in them is hardly insightful, but lately I’ve been wondering if it’s not so much the sadness but the sameness. Watching wicked people prosper over and over, having the same conversations about powerful men and the consequences they will never face, witnessing suffering that was easily anticipated and avoided, asking again and again what can be done about it and being told again and again, essentially, “nothing.” For a moment, early on in this present calamity, it felt like perhaps this could be a real rupture, but by now it’s clear our response will be more asking and more answering with “nothing,” more suffering, more pointless conversations, more prospering for a few at the expense of the rest.
Source: How Do I Figure Out What I Want When Every Day Feels the Same? | Jezebel

Killer robots are already here

Great.

Kargu is a “loitering” drone that uses machine learning-based object classification to select and engage targets, according to STM, and also has swarming capabilities to allow 20 drones to work together.

“The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability,” the experts wrote in the report.

Source: Military drones may have attacked humans for first time without being instructed to, UN report says | The Independent

Nostalgia, friction, and read/write literacy 

I probably need to revisit this (and the references) but I really enjoyed reading Silvio Lorusso’s essay on computer agency and behaviour.

Alan Kay’s pioneering work on interfaces was guided by the idea that the computer should be a medium rather than a vehicle, its function not pre-established (like that of the car or the television) but reformulable by the user (like in the case of paper and clay). For Kay, the computer had to be a general-purpose device. He also elaborated a notion of computer literacy which would include the ability to read the content of a medium (the tools and materials generated by others) but also the ability to write in a medium. Writing on the computer medium would not only include the production of materials, but also of tools. That is for Kay authentic computer literacy: “In print writing, the tools you generate are rhetorical; they demonstrate and convince. In computer writing, the tools you generate are processes; they simulate and decide.”
Source: The User Condition, Silvio Lorusso

Interoperability for browser plugins

This is good news, especially as I’ve noticed recently that a lot of browser plugin developers just create stuff for Chrome.

The WebExtensions Community Group has two goals:
  • Make extension creation easier for developers by specifying a consistent model and common core of functionality, APIs, and permissions.
  • Outline an architecture that enhances performance and is even more secure and resistant to abuse.
The group doesn't want to specify every aspect of the web extensions platform or stifle innovation. Each browser vendor will continue to operate independently with their own policies.
Source: Apple, Mozilla, Google, Microsoft form group to standardize browser plug-ins | AppleInsider

A robot that sticks to ceilings by... vibrating

Novelty, brains, and new experiences

We managed to get away for three nights last weekend, but I’m truly, deeply, looking forward to being able to do some of the amazing family trips we’ve done in previous years. Stupid coronavirus.

The neuroscientist Dr. David Eagleman, who’s focused much of his research on time perception, discovered something fascinating about novel experiences: they make time pass by more slowly. In effect, this can make your life feel longer. Think, for instance, about summers when you were a kid versus summers now.

“The only time you really write down memories is when something is novel. For a child, at the end of a summer, they have lots of memories to draw on because so many things are new. The summer seems to have taken forever in retrospect,” Eagleman explained. “But once you’re an adult, you kind of know the rules of the world, so when you get to the end of the summertime, you think, Oh my gosh, where did that disappear to? Why? Because you don’t have any “footage” to draw on. You can’t really remember much in terms of distinguishable memories of the summer because everything else was pretty much routine.”

Source: The Brain-Changing Magic of New Experiences | GQ

Taking breaks to be more human

I have to say that I’m a bit sick of the narrative that we need time off / to recharge so we can be better workers. Instead, I’d prefer framing it, as Jocelyn K. Glei does, as asking yourself the question “who are you without the doing?”

The point isn’t just that it’s nice to goof off every so often — it’s that it’s necessary. And that’s true even if your ultimate goal is doing better work: Downtime allows the brain to make new connections and better decisions. Multiple studies have found that sustained mental attention without breaks is depleting, leading to inferior performance and decision-making.

In short, the prefrontal cortex — where goal-oriented and executive-function thinking goes on — can get worn down, potentially resulting in “decision fatigue.” A variety of research finds that even simple remedies like a walk in nature or a nap can replenish the brain and ultimately improve mental performance.

Source: How to Take a Break | The New York Times

The farmer uses his plough as his form of work

Someone mentioned this in passing and I looked it up and thought it was neat.

Example of Sator Square
The Sator Square (or Rotas Square) is a two-dimensional word square containing a five-word Latin palindrome. It features in early Christian as well as in magical contexts. The earliest example of the square dates from the ruins of Pompeii, which some scholars attribute to pre-Christian origins, such as Jewish or Mithraic.
Source: Sator Square | Wikipedia

Invisible sculptures are the logical conclusion of NFTs

Speechless.

According to Garau, the sculpture doesn't not exist per se, rather it exists in a vacuum, Newsweek reports. "The vacuum is nothing more than a space full of energy," Garau explained. "And even if we empty it and there is nothing left, according to the Heisenberg uncertainty principle, that 'nothing' has a weight. Therefore, it has energy that is condensed and transformed into particles, that is, into us."
Source: Italian Artist Sells Invisible Sculpture for $18,000 | highsnobiety

Virtual brands and ghost kitchens

This is the next step after ‘ghost kitchens’ — a multitude of virtual brands that basically offer the same thing but packaged differently. As the article explains, the step after this is inevitable: companies like Uber Eats cut out the middleman and open their own ghost kitchens and virtual brands.

Proponents of digital brands and ghost kitchens often pitch them as a way for chefs to experiment. When you don’t have to lease new space or hire new staff, it becomes less costly to try something new. At the same time, the availability of data about what works, platforms that algorithmically reward success with more success, and the way people search for generic products all create evolutionary pressure in the same direction. It’s a push-pull we’ve seen play out on other platforms. In theory, people are free to try weird things; in practice, most everyone makes wings.
Source: The Great Wings Rush | The Verge

Male bias in scientific trials

Wow, this excerpt from Pain & Prejudice is pretty hard-hitting, especially around the paternalistic tendency to treat women as ‘walking wombs’.

In the early 20th century, the endocrine system, which produces hormones, was discovered. To medical minds, this represented another difference between men and women, overtaking the uterus as the primary perpetrator of all women’s ills. Still, medicine persisted with the belief that all other organs and functions would operate the same in men and women, so there was no need to study women. Conversely, researchers said that the menstrual cycle, and varied release of hormones throughout the cycle in rodents, introduced too many variables into a study, therefore females could not be studied.
Source: The female problem: how male bias in medical trials ruined women's health | Women | The Guardian

Degrees of Uncertainty

I rarely watch 24-minute online videos all the way through, but this is excellent and well worth everyone’s time. No matter what your preconceptions are about climate change, or your political persuasion.


A data-driven documentary by Neil Halloran.
Source: Degrees of Uncertainty - A documentary about climate change and public trust in science by Neil Halloran

7 climate tipping points that could change the world forever

I usually share climate-related stuff over at extinction.fyi but this is too good (and scary) an article not to cross-post.

The particular danger, according to the Nature paper’s authors, is that even though change in a tipping element may happen slowly on a human timescale, once a certain threshold in the system is crossed, it can become unstoppable. This means that even if the planet’s temperature is stabilized, the transition of certain Earth systems from one state to another could pick up speed, like a rollercoaster car that’s already gone over the apex of a track.
Source: The 7 climate tipping points that could change the world forever | Grist

Screenshot culture

I’d love to see a longer article about this because discussing the role screenshots play in our increasingly-digitally-mediated culture is fascinating to me. Especially as they’re so easy to fake.

But the most important trait of screenshots now is that they’re slippery: A personal exchange can become a meme or a weapon; a random moment can turn into a work of art or mutate into a callout. The alt-lit community—the internet’s short-lived literary movement—was founded by people who used screenshots of text messages, Gchat conversations, and Snapchats to make poems and digital art. It was later blown up by alleged sexual predators who were exposed via screenshots of their other messages, which circulated on Tumblr and Twitter. The rapper 50 Cent published text-message screenshots on Instagram in which he berated Randall Emmett, the husband of a Vanderpump Rules cast member, for being late on a debt payment, but no one remembers that original tough talk. They remember that Emmett wrote “I’m sorry fofty” over and over, inexplicably, a phrase that lives forever on Etsy—you can get it on a T-shirt, a tote, a wine glass, a onesie. (I received a sparkling im sorry fofty coaster for my birthday last year.)

These transformations lend a spectral quality to screenshots: Corry calls them the “evidentiary technique haunting the online realm.” Her recent paper examines the case of the former New York representative Anthony Weiner, who was humiliated by the leak of a lewd Twitter message in 2011, leading to his resignation from Congress. Two years later, more screenshots of more NSFW online messages leaked to the press, effectively ending his run for New York City mayor; and three years after that, it happened again, becoming an unexpected and wild tabloid story in the run-up to the 2016 election. (Weiner was later convicted of a felony for sending explicit messages to a 15-year-old, and served 18 months in prison.) Reporting on his downfall suggested that a lack of tech savvy played a role: If Weiner had known anything about anything, he would have come up with some better operational security. He was condemned for his predatory behavior, but also mocked for “not knowing how to use the internet,” Corry told me—a shame on top of a shame. How could you be so clueless as to not fear the ever-lurking screenshot?

Source: Screenshots, the Gremlins of the Internet - The Atlantic

The world's most popular websites, mapped

Years ago, iA had a map of the web which was much smaller and less intricate than this. My son had it up on his bedroom wall. The digital world is a lot more complex and a lot less English-speaking than it once was!

“As internet access has spread rapidly throughout developing countries in the last decade, the popularity of non-English websites has increased considerably—about a third of the world’s most visited 50 websites are based in China, with Tmall, QQ, Baidu, or Sohu surpassing Amazon, Yahoo, and even Facebook in terms of traffic,” Vargic says. “There is also a much larger [number] of popular Indonesian, Indian, Iranian, Brazilian, and other sites than even [a few] years ago.”
Source: Think you know the world's most popular websites? Think again | Fast Company

Sky pool awesomeness

Yes, I absolutely would swim across this.

Not for the faint of heart, this new Sky Pool at London's Embassy Gardens is 82 feet long, 10 feet deep, and suspended 110 feet off the ground, joining the buildings together at the tenth floor.
Source: Would You Dare Swim Across this Sky Pool? | Moss and Fog

"Alexa, disable arbitration"

Companies add ‘binding arbitration’ to their terms and conditions because it usually means they have to pay out less money. However, Amazon had to change their terms last month after Amazon Echo users hoisted them by their own petard. Poetic justice.

Yet, this wasn't quite the "win" that Amazon wanted. Echo users have now brought more than 75,000 arbitration demands against the company, according to a Wall Street Journal report. Because Amazon’s previous terms said that the company would pay for arbitration filing fees, the retail giant was on the hook for tens of millions of dollars before a single case was heard. Amazon has now changed course.
Source: After 75,000 Echo arbitration demands, Amazon now lets you sue it | Ars Technica

Meetings as exercises in power

Meetings are one of the major ways in which power is demonstrated and exercised in hierarchical organisations. Trusting people and leaving them alone to get on with stuff is more productive, but work isn’t always about productivity (sadly).

Meeting abstention: Anyone invited to an internal meeting has the power to opt-out. “Send me the summary, please.” If someone abstains, they give up their ability to have a say in the meeting, but most meetings these days don’t actually give people a platform to have a say. And then that person can leave the Zoom room and get back to whatever it is they were doing that was actually productive.

Meeting nullification: If anyone in an internal meeting announces that the meeting is a pointless waste of time, it’s over. The meeting organizer is obligated to send everyone the memo that they probably should have sent in the first place.

Source: Meeting nullification | Seth’s Blog

Quitting instead of returning to the office

I’ve worked from home since 2012, and what was once unusual was becoming more normal even before the pandemic. Now that remote working has been proved to work, I can’t see why anyone (other than those who perhaps enjoy office politics and after-work drinks a little more than they should) would want to go back full-time…

While companies from Google to Ford Motor Co. and Citigroup Inc. have promised greater flexibility, many chief executives have publicly extolled the importance of being in offices. Some have lamented the perils of remote work, saying it diminishes collaboration and company culture. JPMorgan Chase & Co.’s Jamie Dimon said at a recent conference that it doesn’t work “for those who want to hustle.”

But legions of employees aren’t so sure. If anything, the past year has proved that lots of work can be done from anywhere, sans lengthy commutes on crowded trains or highways. Some people have moved. Others have lingering worries about the virus and vaccine-hesitant colleagues.

And for Twidt, there’s also the notion that some bosses, particularly those of a generation less familiar with remote work, are eager to regain tight control of their minions.

“They feel like we’re not working if they can’t see us,” she said. “It’s a boomer power-play.”

Source: Return to Office: Employees Are Quitting Instead of Giving Up Work From Home - Bloomberg

The End of Literary Criticism

Bizarrely enough, given where I grew up, my teenage years were spent reading all kinds of stuff that would probably be shelved under the title ‘literary criticism’, ‘hermeneutics’, or ‘apologetics’.

I don’t think that’s going away, but instead what’s changing is that books (and, more importantly, the people who make, edit, and write them) are no longer seen as the gatekeepers to culture.

Complaining about the state of literary criticism in 2021 seems somewhat futile. First because literary critics have always been viewed as parasitic or, more damningly, irrelevant. Ever since there has been literature, there have been critics. And, ever since there have been critics, there have been writers, readers, and others accusing them of all manner of sins: jealousy, pettiness, poor reading, ad hominem attacks. In an epigraph to her 2016 book, Critics, Monsters, Fanatics, and Other Literary Essays, American novelist and critic Cynthia Ozick cites eighteenth-century poet Alexander Pope, who referred to “those monsters, Criticks!” But the bellyaching is also futile because, after years of being seen, in contemporary discourse, as highbrow irritants, professional critics are well on their way to becoming extinct. As Mark Davis puts it in a 2018 article in the Sydney Review of Books, “Traditional literary gatekeepers now live a kind of half-life; representatives of a zombie culture: the walking dead.”
Source: What We Lose When Literary Criticism Ends | The Walrus

Opportunity costs

While I appreciate the sentiment behind this article, I feel that the title is a bit off, and the solution a bit odd. Instead, I’d argue that by sharing your work early and often, and in a way that people don’t need to have a meeting with you to discuss, you end up iterating towards better solutions.

The other thing is that, so long as you’re rigorous about working hours, workplace chat apps allow you to fix typos after you’ve sent messages. Always useful for people with ‘fat thumbs’ like me.

Unfortunately, time is a limited resource, which creates an opportunity cost. Opportunity costs are the name economists give to the things you could have been doing with a resource you spent in another way. The time you devote to a particular project could have been spent on countless other things on your to-do list, but you chose to spend them on that project.

And there is the rub.

Every project you do at work needs to be effective, but not every project needs to be perfect. An email you send to a close colleague at your level of the organization can be a partial sentence with typos in it and it will still elicit the desired response without damaging the relationship. A note to your boss might need to be written a little more carefully. A presentation to a potential new client had better be polished to a high gloss.

Source: You shouldn’t always give 110% | Fast Company

Anxiety and performance

I’ve recently had to re-evaluate my life and realise that, while there are others who see me as a confident, middle-aged man, that narrative doesn’t bear any kind of scrutiny. Instead, it’s liberating to realise that there is a kind of anxiety which is a two-edged sword; it can propel you forwards and hold you back, depending on how you treat it.

I’d assumed, in my simple two-plus-two way, that people who choose jobs like this found it easy, even enjoyed the thrill. I’m heartened to discover that they, too, feel frightened, their confidence an illusion. And I’m delighted that the shame associated with nervousness, a trait we’re expected to grow out of, has subsided enough for it to be discussed so openly. It’s no coincidence stage fright and its shivering sisters are being talked about now, at a time when even the most confident-seeming people are feeling nervous about re-entering the world.

The pandemic has helped clarify concepts that previously felt abstract. “Nervousness”, we see now, is not just a childish affectation but a rational reaction to situations that feel dangerous, a feeling experienced by many, and often. Similarly, we are being forced to reconsider the idea of “hope”. Rather than a simple heart-fluttering optimism, hope has been revealed to be both necessary and a bit of a slog. A decision, made daily upon waking, to seek out good news and drag ourselves towards it using our nails, our knees, whatever clawed instrument we have to hand. It prevents us from sinking so deep into the porridge of modern life that we no longer have the energy to look ahead.

Source: Feeling nervous isn’t bad – it happens to us all | Life and style | The Guardian

Twitter reactions

Twitter jumped the shark for me a while ago (it’s an angry space), and I spend most of my time on the Fediverse these days. However, the reason I’m sharing this article is its last sentence (which I’ve made bold). Ouch.

Twitter could be adding some new emojis to augment its formerly star-shaped, currently heart-shaped Like button, according to app researcher Jane Manchun Wong. The assets Wong found — which have been reliable predictions of future features in the past — show “cheer,” “hmm,” “sad,” and “haha” emoji reactions, though some currently only have a placeholder emoji.

Facebook has had a similar set of reactions since 2016. But Wong’s leak shows that Twitter could be taking a slightly different path when it comes to which moods it wants users to express: while it has laughing and sad expressions in common with Facebook, Twitter may also include a makes-you-think and cheer option. Twitter doesn’t seem to have the “angry” expression that Facebook does, but that may be because anger on Twitter is already handled by the reply and quote tweet functions.

Source: Twitter could be working on Facebook-style reactions - The Verge

Deepfake maps

There’s plenty to be concerned about in the world at the moment, and this just adds to the party. At a time when most of us navigate by following a blue dot around a smartphone screen, we’re susceptible to manipulation on a number of fronts.

In a paper published online last month, University of Washington professor Bo Zhao employed AI techniques similar to those used to create so-called deepfakes to alter satellite images of several cities. Zhao and colleagues swapped features between images of Seattle and Beijing to show buildings where there are none in Seattle and to remove structures and replace them with greenery in Beijing.

Zhao used an algorithm called CycleGAN to manipulate satellite photos. The algorithm, developed by researchers at UC Berkeley, has been widely used for all sorts of image trickery. It trains an artificial neural network to recognize the key characteristics of certain images, such as a style of painting or the features on a particular type of map. Another algorithm then helps refine the performance of the first by trying to detect when an image has been manipulated.

Source: Deepfake Maps Could Really Mess With Your Sense of the World | WIRED

There's no such thing as a website or web app that doesn't need to be accessible

I feel like accessibility is where design used to be: something that’s ‘sprinkled’ on as an afterthought once an app has been created.

There’s no such thing as a website or web app that doesn’t need to be accessible. If you’re a web developer, accessibility is literally your job. If you ignore it, you’re just a hobbyist.
Source: Accessibility is hard. It's also your job. | Go Make Things

Net Zero Democracy

This article is needlessly written in ‘academese’, but it nevertheless makes an important point about the kind of societies we need to foster in order to get to ‘net zero’ carbon (and beyond). In other words, not just the kind of focus-group-fuelled politics we’ve been used to for the last 25 years, but… something different.

Image: Conseil Tenu par les Rats
One of the foundations of modern party political technopopulism was the UK’s New Labour party of the 1990s. Tony Blair’s populist credentials are often overlooked but it was clear from the outset when he made the radical assertion that “New Labour was the political wing of the people as a whole”. This statement was an early signal of his aim to go beyond a partisan class-based politics and draw the sting from the ideological struggles of left and right. It speaks to a holistic view of society. The epistemic configuration New Labour sought most often to draw on was that of the pollsters. In the Blair, Mandelson and Campbell coalition we see the emergence of focus group-based politics that though primitive by today’s standards is much closer to the Dominic Cummings model of technopopulism than the techne of either Macron or 5Star.

The obvious predicament of the technopopulist paradigm is that for a variety of reasons very few of these offers contain any trace of the genuine cognitive empowerment we need to transform our polities into knowledge democracies. As leading scholar of deliberative democracy James Fishkin argued, focus groups merely ask “..what we think when we don’t think..” That is why it is precisely in the field of experiments in deliberative participation through Citizens’ assemblies that we locate the possibilities of transformation. When properly configured (as in the case of the recent climate assembly in France), they offer a far more powerful epistemic foundation.

Source: Net Zero Democracy - new tactical research

Human and computer memory

There are some good points made in this article about ‘desktop’ operating systems but it’s a bit Mac-centric for my liking. I’m pretty sure, for example, the author would love ChromeOS or another Linux-based operating system.

One really interesting point is the difference between human memory and computer memory. In my own life and experience, I use the latter to augment the former by not even trying to remember anything that computers can store and retrieve more quickly. Kind of like Cory Doctorow’s Memex Method.

Maciej Cegłowski’s powerful “The Internet With A Human Face” highlights the cognitive dissonance between human memory (gradiated and complex and eventually faulty) and computer memory (binary: flawless or nonexistent). We should model fragment search and access after human memory, using access patterns and usage patterns as rich metadata to help the computer understand what is important and what is relevant. And what is related to what. That doesn’t mean auto-deleting documents after some period of time, but just as it’s a lot harder to Google something generic that happened a decade ago and garnered little attention since, it doesn’t need to be “easy” to find the untitled scratch spreadsheet we cooked up to check the car payment budget in 2013 (but we should be able to find it if we need to).
Source: Why We Need to Rethink the Computer ‘Desktop’ as a Concept | by Ben Zotto | May, 2021 | OneZero

Professor goes to 'TikTok University'

This is a fun, yet slightly disturbing, look at mansion houses for influencers where people create TikTok videos. The author is a university professor, which makes his insights all the more interesting in terms of the contrast with where some of these young people would otherwise be - i.e. university.

For the past thirteen years, I’ve taught a course called Living in the Digital Age, which mobilizes the techniques of the humanities—critical thinking, moral contemplation, and information literacy—to interrogate the version of personhood that is being propagated by these social networks. Occasionally, there have been flashes of student insight that rivaled moments from Dead Poets Society—one time a student exclaimed, “Wait, so on social media, it’s almost like I’m the product”—but it increasingly feels like a Sisyphean task, given that I have them for three hours a week and the rest of the time they are marinating in the jacuzzi of personalized algorithms.
Source: [Letter from Los Angeles] The Anxiety of Influencers, By Barrett Swanson | Harper's Magazine

Social studying

I see a lot of music on Spotify and plenty of YouTube videos related to studying. I didn’t realise the rabbit hole went much deeper.

The Study Web is a constellation of digital spaces and online communities—across YouTube, TikTok, Reddit, Discord, and Twitter—largely built by students for students. Videos under the #StudyTok hashtag have been viewed over half a billion times. One Discord server, Study Together, has over 120 thousand members. Study Web extends far past study groups composed of classmates, institution specific associations, or poorly designed retro forums discussing entrance requirements for professional programs. It includes but transcends Studyblrs on Tumblr that emerged in 2014 and eclipses various Reddit and Facebook study groups or inspirational images shared across Pinterest and Instagram. Populated mostly by Gen Z and the youngest of millennials, Study Web is the internet most of us don’t see, and it’s become a lifeline for students from junior high to college.
Source: Caught in the Study Web - Cybernaut - Every

Improving VO2max through blood protein analysis

My wife and I have recently bought new smartwatches (me: Garmin Venu 2, her: Fitbit Versa 3) and the things they tell us really make a difference to how we exercise. What’s reported here looks next-level, even compared to the detailed stats we’ve already got.

The team identified a set of 147 proteins that could indicate a person's VO2max, a marker of cardiorespiratory fitness, before the exercise program, and then a set of 102 proteins that could indicate the change in VO2max after it had been completed. Some of these proteins were also found to be linked to a higher risk of early death, highlighting a connection between cardiorespiratory fitness and long-term health.

“We identified proteins that emanate from bone, muscle, and blood vessels that are strongly related to cardiorespiratory fitness and had never been previously associated with exercise training responses,” says Gerszten.

Based on these revelations, the scientists developed what they call a protein score, which could be used to predict how much a person’s VO2max would change as a result of the exercise. Baseline levels of certain proteins were able to predict who would respond to the exercise with more reliability than established patient factors, according to the scientists, and also predicted which subjects would be unable to significantly improve their VO2max even after a sharp uptick in physical activity.

Source: Blood protein score might predict which exercise will benefit you most | New Atlas
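To make the ‘protein score’ idea concrete: it’s essentially a weighted sum of baseline protein levels, with the weights fitted so the sum predicts the change in VO2max. The sketch below is entirely hypothetical, using made-up proteins and synthetic data (the study’s 147- and 102-protein panels came from real proteomics), with the weights fitted by ordinary least squares.

```python
import numpy as np

# Hypothetical illustration of a linear "protein score" (synthetic data).
rng = np.random.default_rng(42)
n_people, n_proteins = 200, 5
baseline = rng.normal(0.0, 1.0, (n_people, n_proteins))  # standardised levels

# Pretend physiology weights each protein like this (unknown to the fitter).
true_weights = np.array([0.8, -0.5, 0.3, 0.0, 1.1])
delta_vo2max = baseline @ true_weights + rng.normal(0.0, 0.2, n_people)

# Fit the score weights by ordinary least squares on a training split.
train, test = slice(0, 150), slice(150, None)
X = np.hstack([baseline[train], np.ones((150, 1))])      # add an intercept
coef, *_ = np.linalg.lstsq(X, delta_vo2max[train], rcond=None)

def protein_score(levels):
    """Predicted change in VO2max from a matrix of baseline protein levels."""
    return levels @ coef[:-1] + coef[-1]

# Evaluate on held-out subjects.
predicted = protein_score(baseline[test])
corr = np.corrcoef(predicted, delta_vo2max[test])[0, 1]
```

On the held-out synthetic subjects the fitted score tracks the simulated VO2max change closely; the real study presumably involved far more careful validation, but the underlying mechanics are the same kind of weighted-sum prediction.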

3 ways to live a happier life

Useful reminders in this article from Arthur C. Brooks for The Atlantic that neophilia (openness to new experiences) is key to improving our happiness.

First, regularly interrogate your tastes, and run experiments. One common misconception is that our preferences are set in stone and there’s no use trying to change them—especially as we age and become grumpier about new things. The data don’t support this assumption. Indeed, some studies show that older workers are more open than their younger colleagues to changes in their job responsibilities. Meanwhile, our senses of taste and smell tend to dull as we age, making us more or less attracted to certain foods.

[...]

Second, make a point of choosing curiosity over comfort. Write up a list of new experiences and ideas you’ve yet to try, and explore one per week. They don’t have to be big things. Perhaps you never read fiction, not because you don’t like to but because you are more accustomed to biographies; pick up a novel. If you usually watch an old favorite movie instead of something new, or choose the same vacation spot every year, be sure to branch out.

Third, avoid the trap of newness for its own sake. If you’re pretty neophilic, you might already be taking the suggestions above, and reaping the rewards. But you might also be prone to restlessness and instability, and look to material novelty for a quick fix. In this case, try resetting your satisfaction with a “consumption fast”: Don’t buy anything inessential for two months. Your focus will likely migrate from online shopping to more satisfying pursuits.
Source: The Happiness Benefits of Trying New Things - The Atlantic

Maplessness

The academic paper that this blog post by Katie Carr is based on is also well worth a read - particularly around countering the e-s-c-a-p-e ideology (entitlement, surety, control, autonomy, progress, and exceptionalism).

Maps can be a useful tool, but are neither true to the complexity of any landscape, nor free from assumptions about how to engage with a landscape. They can create an illusion of safety through the sense of being in ‘chartered territory’. They condition us to take notice of certain features and ignore others. Road, footpaths, streams and boundaries are included, but not the smells, sounds, and emotional responses to a landscape. They focus on unchanging landscape features, not the seasonal migration of birds, changing colours, or the life and death that inhabits every place. Although a map is never the territory, and a model not the reality, the implicit suggestion of both maps and models is that to map is to measure and name in order to know, and that to know is to control. The trend towards ever greater mapping and detailed measuring of our infinitely complex and changing world reflects the aim, since the Enlightenment, to attain a sense of safety through protecting ourselves from the mysterious. And the history of cartography is insidiously entangled with colonialism and global injustice. The mapping impulse is therefore an expression of what the DA initiator Jem Bendell has called the ideology of e-s-c-a-p-e. Likewise, the emphasis on carrying out ever more detailed research and analysis as a response to growing evidence of the catastrophe unfolding around us can be seen as a habit – even an addiction – for coping with feelings of extreme vulnerability.
Source: The urgent need to slow down: ‘maplessness’ for responding to collapse – Professor Jem Bendell

Digital fashion is another example of a nascent industry beset with inequalities

As the conclusion to this article states, if the digital fashion industry doesn't differentiate itself from IRL fashion now, it's storing up problems for the future.

If one of the main arguments in support of digital fashion is its ability to serve the marginalized, what happens when its development is in the hands of those with overwhelmingly socio-economically privileged backgrounds? The Institute of Digital Fashion (IoDF), a digital fashion studio and retailer, weighed in on why these issues are major obstacles to the healthy advancement of the industry in an online interview. “The industry’s biggest challenges are the current traps of the IRL fashion industry. In brief, if we mirror these, we are lost!” its founders state. Recognizing these issues, founders Cattytay and Leanne Elliott Young are taking steps to help it develop on a socially conscious path.
Source: Everyone from Gucci to Louis Vuitton is betting on digital fashion. Here’s why they should proceed with caution | Fast Company

Sky explosion

The impact of decision fatigue

I remember reading that Barack Obama only had two colours of suits while President of the USA, because making lots of small decisions inhibited his ability to make larger/more important ones.

Decision fatigue impacts our ability to choose between several options, causes us to make impulse purchases, and can even lead us to avoid decisions entirely:
  1. Impaired ability to make trade-offs. Trade-offs feature several choices that have positive and negative elements. They are a particularly energy-consuming form of decision making. When we are faced with too many trade-offs to consider, we end up mentally depleted, and we make poor choices.
  2. Impulse purchases. When shopping, decisions regarding prices and promotions can produce decision fatigue, depleting our willpower and leading to impulse purchases. This is why snacks are usually displayed near the cash register: by the time they get there, many shoppers have decision fatigue and may grab an item they hadn’t initially intended on buying.
  3. Decision avoidance. Sometimes, our mental energy is so depleted, we completely avoid making a choice. We may also try to bypass the mental and emotional costs of decision making by selecting the default option when one is available.
Interestingly, poorer people are more prone to decision fatigue. “If a trip to the supermarket induces more decision fatigue in the poor than in the rich — because each purchase requires more mental trade-offs — by the time they reach the cash register, they’ll have less willpower left to resist the Mars bars and Skittles. Not for nothing are these items called impulse purchases,” explains Dean Spears from Princeton University.
Source: Decision fatigue: how a burden of choices leads to irrational trade-offs

Wherefore art thou, privacy?

As John Naughton points out, if Apple is the only Big Tech company truly interested in preserving our privacy, we should be worried.

So here’s where we are: an online system has been running wild for years, generating billions in profits for its participants. We have evidence of its illegitimacy and a powerful law on the statute book that in principle could bring it under control, but which we appear unable to enforce. And the only body that has, to date, been able to exert real control over the aforementioned racket is… a giant private company that itself is subject to serious concerns about its monopolistic behaviour. And the question for today: where is democracy in all this? You only have to ask to know the answer.
Source: If Apple is the only organisation capable of defending our privacy, it really is time to worry | John Naughton | The Guardian

Badges everywhere!

As I predicted, 2021 is the year when Open Badges and digital credentials go mainstream. It’s unsurprising that ‘open’ isn’t front-and-centre in this Blackboard press release, but it’s still a win that this kind of thing is becoming normalised.

"We're excited to collaborate with Blackboard to integrate Badgr's stackable digital credentialing technology into Blackboard Learn," said Wayne Skipper, Founder of Concentric Sky. "Verifiable, skill-aligned micro-credentials are fast becoming the currency by which learners and employers improve the connections between learning outcomes and employment opportunities."

Badgr Spaces, first available in Blackboard Learn, enables learners to earn personalized digital credentials and instructors to align course objectives and learning pathways with digital badges. Badgr Spaces empowers every member of a learning community with insight, direction and recognition on their personalized learning path.
Source: Blackboard and Concentric Sky Partner to Make Badgr Micro-credentials and Stackable Pathways Available to More Learners

GCHQ violates our privacy

Hardly surprising, but it’s important people are still pushing on this eight years(!) after the Snowden revelations. It’s incredible to me how The Guardian and other outlets can reveal this kind of thing along with the financial corruption set out in the Panama Papers and so little changes as a result.

In Tuesday’s ruling, which confirmed elements of a lower court’s 2018 judgment, the judges said they had identified three “fundamental deficiencies” in the regime. They were that bulk interception had been authorised by the secretary of state, and not by a body independent of the executive; that categories of search terms defining the kinds of communications that would become liable for examination had not been included in the application for a warrant; and that search terms linked to an individual (that is to say specific identifiers such as an email address) had not been subject to prior internal authorisation.
Source: GCHQ’s mass data interception violated right to privacy, court rules | GCHQ | The Guardian

Rat Race 2.0

An insightful post which considers the ways in which current working generations can’t “quit the rat race” in the way previous generations could (or could aspire to doing). You’re either plugged into the network, or you die.

The internet matching machine is fuelled by content. The more of it you produce, the more likely you are to reach the people who'd value what you have to offer. Writing a tweet or uploading a video costs nothing. It might be embarrassing or a waste of time, but that’s about it. In that sense, the downside of playing the game is indeed limited.

But focusing on the risks within the game obscures a much bigger problem: The game is no longer optional. Everyone must play. We have little to lose because we already lost everything: Stable jobs, affordable homes, education that lasts a lifetime, and worry-free retirement are no longer an option. Even money itself ain’t what it used to be. It loses value by simply sitting in the bank.

This is partly a result of various policy failures. But ultimately, it is due to our current stage of technological development. Information moves around and knowledge becomes obsolete faster than ever. Geographical constraints no longer protect the average from the best.

We are all in one giant global arena. We can win world-scale prizes. But we have to play. And even when we win, the rewards tend to be fleeting: they can sustain us for a while, but at any moment, the algorithms might change, or another clever fellow can whisk our followers-customers away. We are as anxious in victory as we are in defeat, and our winnings can only be used to continue to play.

Source: No Floor, No Ceiling

Volcano-powered electricity

Having visited Iceland in December 2019, just before the pandemic hit, I’ve seen these geothermal plants scattered around the landscape. In addition, there are places where fruit and vegetables are grown right over geothermal vents. Awesome.

The Icelandic Deep Drilling Project, IDDP, has been drilling shafts up to 5km deep in an attempt to harness the heat in the volcanic bedrock far below the surface of Iceland.

But in 2009 their borehole at Krafla, northeast Iceland, reached only 2,100m deep before unexpectedly striking a pocket of magma intruding into the Earth’s upper crust from below, at searing temperatures of 900-1000°C.

This borehole, IDDP-1, was the first in a series of wells drilled by the IDDP in Iceland looking for usable geothermal resources. The special report in this month’s Geothermics journal details the engineering feats and scientific results that came from the decision not to plug the hole with concrete, as in a previous case in Hawaii in 2007, but instead attempt to harness the incredible geothermal heat.
Source: Drilling surprise opens door to volcano-powered electricity

A web-based commonplace book

It’s always great to hear Cory read his own work as he’s such an engaging speaker. This is a particularly interesting example, however, as it meshes so well with my experience of writing in the open for the last 15+ years.

This week on my podcast, my inaugural column for Medium, The Memex Method, a reflection on 20 years of blogging, and how it has affected my writing.
Source: The Memex Method | Cory Doctorow's craphound.com

Mastering a 5,400-character typewriter

I can’t even imagine how difficult this must have been to type on!

The IBM Chinese typewriter was a formidable machine—not something just anyone could handle with the aplomb of the young typist in the film. On the keyboard affixed to the hulking, gunmetal gray chassis, 36 keys were divided into four banks: 0 through 5; 0 through 9; 0 through 9; and 0 through 9. With just these 36 keys, the machine was capable of producing up to 5,400 Chinese characters in all, wielding a language that was infinitely more difficult to mechanize than English or other Western writing systems.

To type a Chinese character, one depressed a total of 4 keys—one from each bank—more or less simultaneously, compared by one observer to playing a chord on the piano. Just as the film explained, “if you want to type word number 4862 you would press 4-8-6-2 and the machine would type the right character.⁠”

Each four-digit code corresponded with a character etched on a revolving drum inside the typewriter. Spinning continuously at a speed of 60 revolutions per minute, or once per second, the drum measured 7 inches in diameter, and 11 inches in length. Its surface was etched with 5,400 Chinese characters,⁠ letters of the English alphabet, punctuation marks, numerals, and a handful of other symbols.

Source: How Lois Lew mastered IBM’s 1940s Chinese typewriter
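The four-bank scheme described in the quote is easy to model. Note the arithmetic: one bank of six keys (0–5) and three banks of ten keys (0–9) give 6 × 10 × 10 × 10 = 6,000 possible codes, comfortably enough to address the drum's 5,400 characters. Here's a minimal, purely illustrative sketch in Python; the sample mappings are invented, since the article doesn't reproduce the real code table.

```python
# Illustrative model of the IBM Chinese typewriter's lookup: one key from
# each of four banks forms a four-digit code, which selects a character
# on the revolving drum. Bank 1 has keys 0-5; banks 2-4 have keys 0-9.

# A toy stand-in for the drum's character table. These code-to-character
# pairs are invented for the example -- the real mappings aren't given
# in the article.
DRUM = {
    (4, 8, 6, 2): "字",
    (0, 0, 0, 1): "一",
}

def type_character(bank1: int, bank2: int, bank3: int, bank4: int) -> str:
    """Simulate pressing one key from each of the four banks at once."""
    if not 0 <= bank1 <= 5:
        raise ValueError("bank 1 only has keys 0-5")
    for key in (bank2, bank3, bank4):
        if not 0 <= key <= 9:
            raise ValueError("banks 2-4 only have keys 0-9")
    # Codes with no etched character strike nothing; shown here as "□".
    return DRUM.get((bank1, bank2, bank3, bank4), "□")

print(type_character(4, 8, 6, 2))  # the "4-8-6-2" example from the film
```

Lois Lew's feat, of course, was doing this from memory at speed: recalling thousands of these arbitrary four-digit codes and chording them once per second.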

Working from near home

The idea of subsidizing W.F.N.H. efforts is not novel. Last fall, a startup in the U.K. called Flown began developing what it describes as an Airbnb for undistracted knowledge work. The company’s home page features enviable locations, such as a room in the Cotswolds with a desk facing a floor-to-ceiling picture window overlooking a meadow, available for short-term rent. As the founder of Flown, Alicia Navarro, explained to me, when we talked on the phone, the target for these rentals is not individuals but large organizations that can buy time in bulk to support their employees.
Source: What if Remote Work Didn’t Mean Working from Home? | The New Yorker

Life should contain novelty

Life should contain novelty - experiences you haven't encountered before, preferably teaching you something you didn't already know.  If there isn't a sufficient supply of novelty (relative to the speed at which you generalize), you'll get bored.  (Complex Novelty.)
Source: 31 Laws of Fun - LessWrong

'The individual' is an idea like other ideas

Blue sky through dark clouds

I thought I'd share some things that have really opened my eyes recently.

The first is a two-part interview with Vinay Gupta from the Emerge podcast in 2019. I've followed Vinay's work ever since we tried to get Firecloud (a P2P publishing platform using WebRTC) off the ground in 2013 when I was working at Mozilla. Ten years ahead of the curve, as always.

Working with Vinay absolutely blew my mind, and although we haven't met up in person for a few years, he's been changing the world in the meantime. He was the release manager for Ethereum, and he's currently CEO of Mattereum.

The difference with Vinay, though, is that he's enlightened. I don't mean that in a LinkedIn kind of way. I mean that in a studied-under-a-Hindu-guru kind of way. This underpins all of the humanitarian work he does, some of which you can see at myhopeforthe.world

The two episodes on the Emerge podcast are entitled Waking Up in the Monster Factory (Part 1 / Part 2). I guarantee they are worth your time.


The second thing I'd like to share is a documentary series by Adam Curtis that was released last month. Entitled Can't Get You Out of My Head: An Emotional History of the Modern World it's available on BBC iPlayer and YouTube.

The late, great Dai Barnes implored me to watch Curtis' 2016 documentary HyperNormalisation. I'm only halfway through the new series, and it's having a similar effect: a feeling of waking up and seeing the world as it really is. It's a kind of counter-conspiracy theory.


The crucial thing for me, and my reason for sharing both of these, is the recognition that there's no-one coming to save us. Unlike the people discussed in the 99% Invisible podcast episode The Doom Boom, we need to figure out how to pull together collectively, instead of hunkering down and just making sure that our immediate family and friends are OK.


Quotation-as-title by Harold Rosenberg. Image by Antonino Visalli

Of all lies, art is the least untrue

Nyan cat

The world doesn't particularly need my opinions on NFTs ('non-fungible tokens') as there's plenty of opinions to go round in other newsletters, podcasts, and blog posts.

After doing a bunch of reading, though, I think that the main use case for NFTs will be ticket sales. That is to say, when there is a limited supply of something with intrinsic value, and both the original buyer and seller want to ensure authenticity.

The rest is speculation and gambling, as far as I'm concerned, with a side serving of ecological destruction. I'm also a bit concerned about the enforcement of copyright everywhere on the web that NFTs might lead to...


Twitter's Dorsey auctions first ever tweet as digital memorabilia — "The post, sent from Dorsey’s account in March of 2006, received offers on Friday that went as high as $88,888.88 within minutes of the Twitter co-founder tweeting a link to the listing on ‘Valuables by Cent’ - a tweets marketplace."

NFTs, explained — "“Non-fungible” more or less means that it’s unique and can’t be replaced with something else. For example, a bitcoin is fungible — trade one for another bitcoin, and you’ll have exactly the same thing. A one-of-a-kind trading card, however, is non-fungible. If you traded it for a different card, you’d have something completely different. You gave up a Squirtle, and got a 1909 T206 Honus Wagner, which StadiumTalk calls “the Mona Lisa of baseball cards.” (I’ll take their word for it.)"

NFTs are a dangerous trap — "The more time and passion that creators devote to chasing the NFT, the more time they’ll spend trying to create the appearance of scarcity and hustling people to believe that the tokens will go up in value. They’ll become promoters of digital tokens more than they are creators. Because that’s the only reason that someone is likely to buy one–like a stock, they hope it will go up in value. Unlike some stocks, it doesn’t pay dividends or come with any other rights. And unlike actual works of art, NFTs aren’t usually aesthetically beautiful on their own, they simply represent something that is."

Cryptodamages: Monetary value estimates of the air pollution and human health impacts of cryptocurrency mining — "Results indicate that in 2018, each $1 of Bitcoin value created was responsible for $0.49 in health and climate damages in the US and $0.37 in China. The similar value in China relative to the US occurs despite the extremely large disparity between the value of a statistical life estimate for the US relative to that of China. Further, with each cryptocurrency, the rising electricity requirements to produce a single coin can lead to an almost inevitable cliff of negative net social benefits, absent perpetual price increases."

HERE IS THE ARTICLE YOU CAN SEND TO PEOPLE WHEN THEY SAY “BUT THE ENVIRONMENTAL ISSUES WITH CRYPTOART WILL BE SOLVED SOON, RIGHT?” — "Much like the world of blue chip, some NFTs may be bought and sold simply as artworks, intended for personal collections and acquired for aesthetic, conceptual, or personal reasons. However, every single one is made from the outset to be liquidated- an asset first, artwork second. They are images attached to dollar figures, not the other way around."


Quotation-as-title by Gustave Flaubert. Image of Nyan Cat, a 2011 meme, which sold as an NFT for ~$600,000 recently.

One should always be a little improbable

Object hitting and bending a wall

🍲 Introducing ‘Food Grammar,’ the Unspoken Rules of Every Cuisine — "Grammars can even impose what is considered a food and what isn’t: Horse and rabbit are food for the French but not for the English; insects are food in Mexico but not in Spain. Moreover, just as “Hey, man!” is a friendly greeting for a buddy but maybe not for your boss, foods may not be suitable in all grammatical contexts. “A Frenchman would think it odd to drink white coffee with dinner and an Italian probably would resent being served spaghetti for breakfast,” writes Claude Fischler in “Food, Self and Identity.” By the same token, rice is appropriate for breakfast in Korea but not in Ireland."

The essence of this article is that food is a reflection of culture, and our views of other cultures can become ossified. A good read.


🌍 Scientists begin building highly accurate digital twin of our planet — "The digital twin of the Earth is intended to be an information system that develops and tests scenarios that show more sustainable development and thus better inform policies. "If you are planning a two-metre high dike in The Netherlands, for example, I can run through the data in my digital twin and check whether the dike will in all likelihood still protect against expected extreme events in 2050," says Peter Bauer, deputy director for Research at the European Centre for Medium-Range Weather Forecasts (ECMWF) and co-initiator of Destination Earth. The digital twin will also be used for strategic planning of fresh water and food supplies or wind farms and solar plants."

This is the kind of thing that simultaneously fills me with hope and fear. On the one hand, such a great idea; on the other, if we get the model wrong, it could make things worse...


🤑 Why an Animated Flying Cat With a Pop-Tart Body Sold for Almost $600,000 — "The sale was a new high point in a fast-growing market for ownership rights to digital art, ephemera and media called NFTs, or “nonfungible tokens.” The buyers are usually not acquiring copyrights, trademarks or even the sole ownership of whatever it is they purchase. They’re buying bragging rights and the knowledge that their copy is the “authentic” one."

I've got a blog post percolating in my mind at the moment about digital reserve currencies, NFTs and deepfakes. There's something here about an emerging hyper-capitalist dystopia, for sure.


Quotation-as-title by Oscar Wilde. Image by Tu Tram Pham.

Life is a great bundle of little things

As I'm catching up with news from various sources and bookmarking articles to come back and share via Thought Shrapnel, I also come across interesting tools and resources.

Here are some of them that I thought were interesting enough to share.

ArchiveWeb.page is "the latest tool from Webrecorder to turn your browser into a full-featured interactive web archiving system!"

Bookfeed.io is "a simple tool that allows you to specify a list of authors, and generates an RSS feed with each author’s most recently released book."

Loudreader is "the world's only ebook reader that can open .azw3 [and] .mobi files in a browser!"

NES.css is "a NES style (8bit-like) CSS framework." (also see Simple.css)

novelWriter is "a markdown-like text editor designed for writing novels and larger projects of many smaller plain text documents."

Open Peeps is a hand-drawn illustration library. "You can use Open Peeps in product illustration, marketing imagery, comics, product states, user flows, personas, storyboarding, invitations for your quinceañera...or anything else not on this list."

Pattern Generator provides you with a way to "create unique, seamless, royalty-free patterns".

Same Energy is "a visual search engine. You can use it to find beautiful art, photography, decoration ideas, or anything else."

Screenstab allows you to "cut down on time and effort by auto-generating appealing graphics for marketing materials, social media posts, illustrations & presentation slides."


Quotation-as-title by Oliver Wendell Holmes. Image by Jessica Lee.

Criticism, like lightning, strikes the highest peaks

🙏 Blogging as a forgiving medium — "The ability to “move it around for a long time” is what I’m looking for in a writing medium — I want words and images to be movable, I want to switch them out, copy and cut and paste them, let them mutate."

I love the few minutes after I press publish on a post, which feels like a race against time between me and the first readers of it. Who will spot the typos and grammatical errors first?


📝 Open working blog and weeknotes templates — "We wrote a guide on how to write weeknotes for Catalyst projects. It is based on Sam Villis’ guide and the templates here are based on Sam’s guide too."

This is useful, especially if you're not blogging yet (or haven't for a while!)


How to be more productive without forcing yourself — "Basically, if you’re addicted to any of the high-dopamine, low-effort activity, please quit it. At least temporarily so you can reestablish a healthy relationship to work. The more experienced we’re about the topic, the more obvious this is. There is no other way than to temporarily quit the addiction."

I like the practical advice in this article. Too many people do stuff that's too low-value, thus squandering their talent and ability to take on more important stuff.


🤔 Objective or Biased — "This type of analysis software is not widely used in recruiting in Germany and Europe right now. However, large companies are definitely interested in the technology, as we learn during off-the-record conversations. What seems to be attractive: A shorter application process which can save a lot of resources and money."

This is kind of laughable and serious at the same time. I've felt the pain of hiring but, as this research shows, automating the hard parts doesn't lead to awesome results.


📱 Contact-tracing apps were the biggest tech failure of the COVID-19 pandemic — "The system itself, on a technical level, is the root of the problem. In an effort to provide something that could be used universally, while also protecting users’ privacy, Google and Apple came up with a system that was doomed to be useless."

My concern here is that the fault for the failure will be placed at the door of privacy activists.


Quotation-as-title by Baltasar Gracián. Images by Vera Shimunia, Russian textile artist via #WOMENSART

Unless one is a genius, it is best to aim at being intelligible

Can on rotary phone. Everything is pink.

👯‍♀️ Secrets of the VIP Party: Why the 1% Love ‘Ritualised Waste’ — "Post-pandemic, in a broader sense, you glimpsed an immediate reckoning and disgust with ostentatious displays of wealth in the context of COVID-19. We saw some instances where people would make statements like ‘we’re all in this together’, while broadcasting from their luxury yacht or private island, followed by a backlash. I think they’ve quickly learned not to do that since…"

This is an incredible read: an interview with a former model turned sociology professor.


💳 Germany To Let Citizens Store ID Cards On Smartphone — "The Interior Ministry said Wednesday that from this fall, citizens will be able to use the electronic ID stored in their smartphones together with a PIN number to prove they are who they claim to be when communicating with authorities or private businesses."

It's Germany, so I'm sure they'll do this sensibly, but it's incredible to think how quickly smartphones have become an essential part of our everyday life.


🏛️ 'A very dangerous epoch': historians try to make sense of Covid — "It is not just the Covid pandemic that can make these feel like unusually significant times. Populism, Trump’s rise and (perhaps) fall, Brexit, the Black Lives Matter and #MeToo protests, mass movement of refugees, the increased might of both China and India and many other issues have contributed to a sense of humanity having reached a historic moment, all while the climate crisis rages with ever more urgency."

People always think they're living through unprecedented times. But in our case, we probably are.


🚸 Why there's no such thing as lost learning — "The fact is that we – as a community of politicians, teachers and education experts – decide what any child must know, understand or be able to do at each age, not some natural law of learning. Why should a child know the structure of a cell membrane by the age of 16? I couldn’t know that information at 16 because it had not yet been fully discovered and described. But I learned it at a later stage."

This is a useful post to point people towards, as the author does a great job of pointing out the ridiculousness of putting an arbitrary body of knowledge before the well-being of young people.


👑 Should Elizabeth II be Elizabeth the Last? At least allow Britain a debate — "But none of [these revelations] reflect the real damage the monarchy inflicts on us. It’s not their money nor their abuse of power, but their very existence that ambushes and infantilises the public imagination, making us their subjects in mind and spirit."

My views on privilege hardly need rehearsing here, but suffice to say that one of the main problems with our tiny island is the delusions of grandeur we have through outdated institutions such as the monarchy.


Quotation-as-title by Anthony Hope. Image by Tyler Nix.

It would not be better if things happened to men just as they wish

🕸️ A plan to redesign the internet could make apps that no one controls — "Rewinding the internet is not about nostalgia. The dominance of a few companies, and the ad-tech industry that supports them, has distorted the way we communicate—pulling public discourse into a gravity well of hate speech and misinformation—and upended basic norms of privacy. There are few places online beyond the reach of these tech giants, and few apps or services that thrive outside of their ecosystems."

It is, inevitably, focused on crypto tokens, which provide an economic incentive. If only there was a way to fix things that didn't seem to be driven by making the inventors obscenely rich?


🤯 Can’t Get You Out of My Head review – Adam Curtis's 'emotional history' is dazzling — "Whether you are convinced or not by the working hypothesis, Can’t Get You Out of My Head is a rush. It is vanishingly rare to be confronted by work so dense, so widely searching and ambitious in scope, so intelligent and respectful of the audience’s intelligence, too. It is rare, also, to watch a project over which one person has evidently been given complete creative freedom and control without any sense of self-indulgence creeping in."

Adam Curtis' documentary 'Hypernormalisation' blew my mind, and I'm already enjoying the first of these six hour-long documentaries.


💸 Why Mastercard is bringing crypto onto its network — "We are preparing right now for the future of crypto and payments, announcing that this year Mastercard will start supporting select cryptocurrencies directly on our network. This is a big change that will require a lot of work. We will be very thoughtful about which assets we support based on our principles for digital currencies, which focus on consumer protections and compliance."

Companies like Mastercard haven't got much of a choice here: they have to either get with the program or risk being replaced. Hopefully it will help simplify what is a confusing picture at the moment. I've had problems recently withdrawing money from cryptocurrency exchanges to my bank accounts.


👉 Hovering over decline and clicking accept — "There's so much written about self-care. And much of it starts from a good place but falls apart the moment things get hectic. But this idea of Past You working in service of Future You isn't a one-off. It's not a massage you sneak in one Friday morning. The secret hope that 60 minutes of hot rocks will counteract 12 hours a day hunched over a laptop."

Some good advice in here from the Nightingales, whose book is also worth a read.


👨‍💻 Praxis and the Indieweb — "If a movement has at its core a significant barrier to entry, then it is always exclusionary. While we’ve already seen that the movement has barriers at ability and personality, it is also true that, as of 2021, there is a significant barrier in terms of monetary resources."

As I said a year ago in this microcast, I have issues with the IndieWeb, which is why I'm more of a fan of decentralisation through federation.


Quotation-as-title by Heraclitus. Image by Saad Chaudhry.

Taste ripens at the expense of happiness

Oranges growing on a tree

🧐 Habits, Data, and Things That Go Bump in the Night: Microsoft for Education — "Microsoft’s ubiquity, however, is sometimes mistaken for banality. Because it is everywhere, because we have all used it forever, we assume we can trust it."

I haven't voluntarily used something made by Microsoft (as opposed to acquired by it) for... about 20 years?


You Can Set Screen-Time Rules That Don’t Ruin Your Kids’ Lives — "Bear in mind that the limits you set need not be a specific number of minutes. Try to think of other, more natural ways of breaking up their activities. Maybe your kids play one game before tackling homework. Also, consider granting them one day per weekend with fewer restrictions on screen-time socializing. Giving them more autonomy over their weekends helps approximate the fun and flexibility of their pre-COVID world, and lets them unwind and hang out more with their friends."

This has been really hard to manage as a parent, and it's easy to think that you're always doing it wrong.


💬 Why do we keep on telling others what to do? — "Usually starting a conversation out with telling people what you feel they are doing wrong is going to make it a negative conversation all in all, and I tend to believe that it's better to follow “the campfire rule”, try to make all people taking part in a conversation end up a bit better off than what they were when they started the conversation, and telling people what to do or what not to are going straight against this."

Post-therapy, I'm much better at focusing on changing myself than trying to change others. I'd recommend therapy, but that might be construed as an implicit instruction...


🙌 Twitter Considers Subscription Fee for Tweetdeck, Unique Content — "To explore potential options outside ad sales, a number of Twitter teams are researching subscription offerings, including one using the code name “Rogue One,” according to people familiar with the effort. At least one idea being considered is related to “tipping,” or the ability for users to pay the people they follow for exclusive content, said the people, who asked not to be named because the discussions are internal. Other possible ways to generate recurring revenue include charging for the use of services like Tweetdeck or advanced user features like “undo send” or profile-customization options."

This is fantastic news. It would destroy Twitter as it currently stands, but that's fine as it's much worse than it was a decade ago.


🔒 Do lockdowns work? — "It's absurd thinking, but the sceptics have finally found an argument that cannot be categorically disproved. Lockdowns have a scientific rationale: you can't transmit a virus to people you don't meet. Contrary to what Toby says in his article, they also have historic precedents: during the Spanish Flu, cities such as Philadelphia closed shops, churches, schools, bars and restaurants by law (they also made face masks mandatory). And now we have numerous natural experiments from around the world showing that infection rates fall when lockdowns are introduced."

There will always be idiots who try and use their influence and eloquence to ensure they're heard. Thankfully, there are people like this who can dismantle their arguments brick-by-brick.


Quotation-as-title by Jules Renard. Image by Elena Mozhvilo.

Continuous eloquence is tedious

Corner of a high-rise building

🏭 Ukraine plans huge cryptocurrency mining data centers next to nuclear power plants — "Ukraine's Energoatom followed up [the May 2020] deal with another partnership in October. The state enterprise announced an MoU with Dutch mining company Bitfury to operate multiple data centers near its four nuclear power plants, with a total mining consumption of 2GW."

It's already impossible to buy graphics cards, due to their GPUs being perfect for crypto mining. That fact doesn't seem like it's going to be resolved anytime soon.


😔 The unbearable banality of Jeff Bezos — "To put it in Freudian terms, we are talking about the triumph of the consumerist id over the ethical superego. Bezos is a kind of managerial Mephistopheles for our time, who will guarantee you a life of worldly customer ecstasy as long as you avert your eyes from the iniquities being carried out in your name."

I've started buying less stuff from Amazon; even just removing the app from my phone has made me treat them as just another online shop. I also switched a few years ago from a Kindle to an ePub-based e-reader.


📱 The great unbundling — "Covid brought shock and a lot of broken habits to tech, but mostly, it accelerates everything that was already changing. 20 trillion dollars of retail, brands, TV and advertising is being overturned, and software is remaking everything from cars to pharma. Meanwhile, China has more smartphone users than Europe and the USA combined, and India is close behind - technology and innovation will be much more widely spread. For that and lots of other reasons, tech is becoming a regulated industry, but if we step over the slogans, what does that actually mean? Tech is entering its second 50 years."

This is a really interesting presentation (and slide deck). It's been interesting watching Evans build this iteratively over the last few weeks, as he's been sharing his progress on Twitter.


🗯️ The Coup We Are Not Talking About — "In an information civilization, societies are defined by questions of knowledge — how it is distributed, the authority that governs its distribution and the power that protects that authority. Who knows? Who decides who knows? Who decides who decides who knows? Surveillance capitalists now hold the answers to each question, though we never elected them to govern. This is the essence of the epistemic coup. They claim the authority to decide who knows by asserting ownership rights over our personal information and defend that authority with the power to control critical information systems and infrastructures."

Zuboff is an interesting character, and her book on surveillance capitalism is a classic. This article might be a little overblown, but it's still an important subject for discussion.


☀️ Who Built the Egyptian Pyramids? Not Slaves — "So why do so many people think the Egyptian pyramids were built by slaves? The Greek historian Herodotus seems to have been the first to suggest that was the case. Herodotus has sometimes been called the “father of history.” Other times he's been dubbed the “father of lies.” He claimed to have toured Egypt and wrote that the pyramids were built by slaves. But Herodotus actually lived thousands of years after the fact."

It's always good to challenge our assumptions, and, perhaps more importantly, analyse why we came to hold them in the first place.


Quotation-as-title by Blaise Pascal. Image by Victor Forgacs.

When we ask for advice we are usually looking for an accomplice

Changing the Letter, 1908, by Joseph Edward Southall. The subject is taken from the poem 'The Man Born to be King' from William Morris's 'The Earthly Paradise'. The sealed letter is addressed 'To The Governor'

🏡 What can we learn from the great working-from-home experiment? — "A few knowledge jobs, such as IT support, are properly systematised to allow focused work without endless ad hoc emails. [Cal] Newport believes that others will follow once we all wise up. Or we may find that certain kinds of knowledge work are too unruly to systematise. Improvisation will remain the only mode of working — and, for that, face-to-face contact seems essential."

I disagree with this, having spent almost a decade doing creative, improvisational work, mostly from my home office.


They left Mozilla to make the internet better. Now they’re spreading its gospel for a new generation. — "Plenty of older tech companies spawned networks of industry leaders. Mozilla has, too, only it's a different kind of group: a collection of values-driven engineers, marketers, program managers and founders. Most of them share a common story: Looking for a sense of purpose in tech, they took a financial hit for the chance to become part of the company's cult-like obsession with openness and privacy. Though the company had its flaws, they left feeling deep loyalty to the mission, and a sense of betrayal from those who went on to work for the tech giants Mozilla has been battling."

Some companies act as a filter for a certain type of person. Mozilla is like that, and while I was there I worked with some of the most ethical and awesome people I've ever come across.


🤪 Why It’s Usually Crazier Than You Expect — "The idea that people like (or hate) what other people like (or hate) is important, because it lets small ideas grow bigger than you’d guess if you assume everything is ranked by quality alone. Social momentum is hard to model on a spreadsheet, so it’s hard to predict or think about in terms that seem rational. But it’s so powerful."

The standard economic model is that people act in their individual and group self-interest. But humans are much more complicated than that.


🎓 Academics Are Really, Really Worried About Their Freedom — "Some will process this as a kind of whining, supposing that all we should really be concerned about is whether people are outright dismissed. However, elsewhere a hostile work environment is considered a breach of civil rights, and as one correspondent wrote, “It isn’t just fear of firing that motivates professors and grad students to be quiet. It is a desire to have friends, to be part of a community. This is a fundamental part of human psychology. Indeed, experiments examining the effects of ostracism highlight what a powerful existential threat it is to be ignored, excluded, or rejected. This has been documented at the neurological level. Ostracism is a form of social death. It is a very potent threat.”"

Given how conservative humanity has been for the past tens of thousands of years, and given how radical we need to be to fix the world, I don't have lots of sympathy with this view. Especially when tenured professors have the kind of job security most people can only dream of.


👩‍💻 Where we are with digital learning adoption — "We should have less big bang summative exams sat in big rooms with invigilators, there are plenty of alternatives. Online assessment systems can at least allow for typing, which is more authentic, and why not also speaking, and drawing? And in the scenarios where an unseen timed assessment is the only option and it has to be online: sometimes proctoring might be useful. It shouldn’t be the default. But it might have a place, sometimes."

I'm sharing this to +1,000,000 Amber's suggestion that, for assessment purposes, speaking and drawing should be as authentic as typing and writing.


Quotation-as-title by Marquis de la Grange. Image: Changing the Letter, 1908, by Joseph Edward Southall

Mediocrity is a hand-rail

Venus flytrap cyborg

🤖 Engineers Turned Living Venus Flytrap Into Cyborg Robotic Grabber — "The main purpose of this research was to find a way of creating robotic mechanisms able to pick up tiny, delicate objects without harming them. And this particular cyborg creation was able to do just that."

👀 First Look: Meet the New Linux Distro Inspired by the iPad — "This distro is designed to be a tablet first and a “laptop-lite” experience second. And I do mean “lite”; this is not trying to be a desktop Linux distro that runs tablet apps, but a tablet Linux distro that can run desktop ones – a distinction that’s worth keeping in mind."

🤯 DALL·E: Creating Images from Text — "GPT-3 showed that language can be used to instruct a large neural network to perform a variety of text generation tasks. Image GPT showed that the same type of neural network can also be used to generate images with high fidelity. We extend these findings to show that manipulating visual concepts through language is now within reach."

🔊 Surround sound from lightweight roll-to-roll printed loudspeaker paper — "The speaker track, including printed circuitry, weighs just 150 grams and consists of 90 percent conventional paper that can be printed in color on both sides."

👩‍💻 You can now run Linux on Apple M1 devices — "While Linux, and even Windows, were already usable on Apple Silicon thanks to virtualization, this is the first instance of a non-macOS operating system running natively on the hardware."


Quotation-as-title by Montesquieu. Image from top-linked post.

The certainties of one age are the problems of the next

Black-and-white photo of a man with beard emerging from shed

🏙️ How the spread of sheds threatens cities — "A white-collar worker who has tried to work from the kitchen table for the past nine months might be keen to return to the office. A worker who has an insulated garden shed with Wi-Fi will be less so. Joel Bird, who builds bespoke sheds, is certain that his clients envisage a long-term change in their working habits. “They don’t consider it to be temporary,” he says. “They’re spending too much money.”"

😬 Transactional Enchantment — "The greatest endemic risk to the psyche in 2021 is not that you’ll end up on the streets next week or fail to fund your retirement in 30 years. The greatest risk is that you’ll feel so relentlessly battered by the weirdness all around that you’ll go numb and simply disengage from the world entirely today."

🕸️ The unreasonable effectiveness of simple HTML — "Are you developing public services? Or a system that people might access when they’re in desperate need of help? Plain HTML works. A small bit of simple CSS will make look decent. JavaScript is probably unnecessary – but can be used to progressively enhance stuff. Add alt text to images so people paying per MB can understand what the images are for (and, you know, accessibility)."
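As a sketch of what the linked article is advocating (this markup is my own illustration, not taken from the piece), a page like the one below works with HTML alone; the stylesheet and script are optional enhancements, and the page degrades gracefully if neither loads:

```html
<!-- Minimal, progressively-enhanced page: functional with HTML alone -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Apply for help</title>
  <!-- Small, optional stylesheet: the page still works if it never arrives -->
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <h1>Apply for help</h1>
  <!-- Alt text so people paying per MB (or using screen readers) know what the image shows -->
  <img src="office-map.jpg" alt="Map showing the advice office, two minutes from the station">
  <!-- A plain form submits without any JavaScript at all -->
  <form action="/apply" method="post">
    <label for="name">Your name</label>
    <input id="name" name="name" required>
    <button type="submit">Send</button>
  </form>
  <!-- JavaScript only enhances (e.g. inline validation); it is never required -->
  <script src="enhance.js" defer></script>
</body>
</html>
```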

💬 Convocational Development — "The fundamental difference between the convocation and traditional open source is that energy is put into facilitating discussions between users, coders, graphic designers etc. Documentation and instructions are often the weakest part of an open source project, and that excludes people who don’t have the time or ability to assemble a mental model of the open source software and its capabilities from just the code and the meagre promotional materials. The convocation starts as a basic web forum, but evolves tools and cultures that enable greater participation in the development process itself."

📈 GameStop Is Rage Against the Financial Machine — "Instead of greed, this latest bout of speculation, and especially the extraordinary excitement at GameStop, has a different emotional driver: anger. The people investing today are driven by righteous anger, about generational injustice, about what they see as the corruption and unfairness of the way banks were bailed out in 2008 without having to pay legal penalties later, and about lacerating poverty and inequality. This makes it unlike any of the speculative rallies and crashes that have preceded it."


Quotation-as-title by R.H. Tawney. Image from top-linked post.

You don't hate Mondays, you hate capitalism

🧠 I Feel Better Now — "Brain chemistry and childhood trauma go a long way toward explaining a person’s particular struggles with mental health, but you could be forgiven for wondering whether there is also something larger at work here—whether the material arrangement of society itself, in other words, is contributing to a malaise that various authorities nevertheless encourage us to believe is exclusively individual."

😟 Where loneliness can lead — "Totalitarianism uses isolation to deprive people of human companionship, making action in the world impossible, while destroying the space of solitude. The iron-band of totalitarianism, as Arendt calls it, destroys man’s ability to move, to act, and to think, while turning each individual in his lonely isolation against all others, and himself. The world becomes a wilderness, where neither experience nor thinking are possible."

🙍 The problem is poverty, however we label it — "If your only choice of an evening is between skipping dinner or going to sleep in the cold before waking up in the cold, then you are not carefully selecting between food poverty and fuel poverty, like some expense-account diner havering over the French reds on a wine list. You are simply impoverished."

👩‍💻 Malware found on laptops given out by government — "According to the forum, the Windows laptops contained Gamarue.I, a worm identified by Microsoft in 2012... The malware in question installs spyware which can gather information about browsing habits, as well as harvest personal information such as banking details."

🏭 Turn off that camera during virtual meetings, environmental study says — "Just one hour of videoconferencing or streaming, for example, emits 150-1,000 grams of carbon dioxide... But leaving your camera off during a web call can reduce these footprints by 96%."


Quotation-as-title by unknown. Image via top-linked article.

Most don't talk or act according to who they are, but as they are obliged to

NASA image of stars

The World’s Oldest Story? Astronomers Say Global Myths About ‘Seven Sisters’ Stars May Reach Back 100,000 Years — "Why are the Australian Aboriginal stories so similar to the Greek ones? Anthropologists used to think Europeans might have brought the Greek story to Australia, where it was adapted by Aboriginal people for their own purposes. But the Aboriginal stories seem to be much, much older than European contact. And there was little contact between most Australian Aboriginal cultures and the rest of the world for at least 50,000 years. So why do they share the same stories?"

🚶‍♂️ The joy of steps: 20 ways to give purpose to your daily walk — "We need to gallivant around outside in daylight so that our circadian rhythms can regulate sleep and alertness. (Yes, even when the sky is resolutely leaden, it is still technically daylight.) Walking warms you up, too; when you get back indoors, it will feel positively tropical."

🔐 How Law Enforcement Gets Around Your Smartphone's Encryption — "Cryptographers at Johns Hopkins University used publicly available documentation from Apple and Google as well as their own analysis to assess the robustness of Android and iOS encryption. They also studied more than a decade's worth of reports about which of these mobile security features law enforcement and criminals have previously bypassed, or can currently, using special hacking tools."

🚫 Misinformation dropped dramatically the week after Twitter banned Trump and some allies — "The findings, from Jan. 9 through Friday, highlight how falsehoods flow across social media sites — reinforcing and amplifying each other — and offer an early indication of how concerted actions against misinformation can make a difference."

😲 The Ethics of Emotion in AI Systems (Research Summary) — "There will always be a gap between the emotions modelled and the experience of EAI systems. Addressing this gap also implies recognizing the implicit norms and values integrated into these systems in ways that cannot always be foreseen by the original designers. With EAI, it is not just a matter of deciding between the right emotional models and proxy variables, but what the responses collected signify in terms of human beings’ inner feelings, judgments, and future actions."


Quotation-as-title by Baltasar Gracián. Image from top-linked post.

The problem is that the person who should be the most restrained is the least

Turtle poking its head out of water covered with duckweed

🦆 Bionic Duckweed: making the future the enemy of the present — "In its broader sense, bionic duckweed can be thought of a sort of unobtainium that renders investment in present-day technologies pointless, unimaginative, and worst of all On The Wrong Side Of History. “Don’t invest in what can be done today, because once bionic duckweed is invented it’ll all be obsolete.” It is a sort of promissory note in reverse, forcing us into inaction today in the hope of wonders tomorrow."

🤔 The best tech of CES 2020: Where are they now? — "What looked like it was just a one-off at the largest tech tradeshow in the world, but actually turned out to be a real product? What got a lot of buzz and then dropped off our radars, only to resurface months later? And, of course, what was simply too good to be true?"

💬 If it will matter after today, stop talking about it in a chat room — "Rule of thumb: If a discussion will matter after today, don’t have it in a chat room. Check out Discourse, Twist, Carrot, Threads, Basecamp, Flarum, or heck even GitHub issues. These tools exist for a reason. They solve a real problem."

🔥 Sauron Has the Power of the One Ring for Another Week, What’s the Worst that Could Happen? — "Upon further reflection, we are not entirely sure the orcs and trolls who participated in this demonstration were indeed sent by Sauron. Yes, the Mouth of Sauron encouraged the pro-Evil horde into a “trial by combat.” Yes, the crowd was painted with Sauron’s Red Eye and chanted his name. But anyone can mix paint and yell. We have it on good rumor that there were hobbits mixed into the gathering and inciting violence. Granted, we started these rumors, but oftentimes rumors are true."

Working Off-Grid Efficiently — "For the first 3 years we tested the limits of our space, and at first, it was difficult to create new things, as we had to make time to learn how to solve the underlying problems. Our boat was not just an office, it was also our house and transport. As for us, we were artists, but also had to be plumbers, deckhands, electricians, captains, janitors and accountants."


Quotation-as-title by Baltasar Gracián. Image from top-linked post.

There are many things we despise in order that we may not have to despise ourselves

Chart showing Internet 1.0 ("Technology"), Internet 2.0 ("Economics") and Internet 3.0 (Politics). A u-shaped line indicates 1.0 and 3.0 as 'decentralised' and 2.0 as 'centralised'. Via Stratechery.

🇺🇸 Well, that was expected — "I’ve recorded this here since it feels like the chronology of events and the smaller details are already evaporating, and this helps me wrap my head around a tiny fraction of it. If you happen to read this, don’t take this at face value (nor anything else on the web for that matter). Do your own research and correct me if you think any of the timestamps are wrong."

📺 Fox News and the real insurrection — "After Democrats said they planned to impeach Trump again, Fox opinionators echoed the risible Republican talking point that such a move would be provocative; after Twitter banned Trump, they pivoted to bash Big Tech. Yesterday morning, Jeanine Pirro compared Amazon’s decision to boot Parler, an app popular among right-wing extremists, from its web-hosting services to Kristallnacht—the night, in 1938, when Nazis in Germany killed around one hundred Jewish people and arrested tens of thousands more"

Lost Passwords Lock Millionaires Out of Their Bitcoin Fortunes — "Of the existing 18.5 million Bitcoin, around 20 percent — currently worth around $140 billion — appear to be in lost or otherwise stranded wallets, according to the cryptocurrency data firm Chainalysis. Wallet Recovery Services, a business that helps find lost digital keys, said it had gotten 70 requests a day from people who wanted help recovering their riches, three times the number of a month ago."

🕸️ Pirated Academic Database Sci-Hub Is Now on the ‘Uncensorable Web’ — "As evidenced by Sci-Hub’s own problems, the decentralized web is being built out of fears of deplatforming. As the internet’s access points are increasingly centralized in the hands of a few actors, certain applications – most recently, Twitter-alternative Parler – have faced censorship at the hands of web server providers, app stores and DNS certificate authorities."

🏛️ Internet 3.0 and the Beginning of (Tech) History — "Here technology itself will return to the forefront: if the priority for an increasing number of citizens, companies, and countries is to escape centralization, then the answer will not be competing centralized entities, but rather a return to open protocols. This is the only way to match and perhaps surpass the R&D advantages enjoyed by centralized tech companies; open technologies can be worked on collectively, and forked individually, gaining both the benefits of scale and inevitability of sovereignty and self-determination."


Quotation-as-title by Vauvenargues. Image from bottom-linked post.

Nothing is repeated, and everything is unparalleled

🤔 We need more than deplatforming — "But as reprehensible as the actions of Donald Trump are, the rampant use of the internet to foment violence and hate, and reinforce white supremacy is about more than any one personality. Donald Trump is certainly not the first politician to exploit the architecture of the internet in this way, and he won’t be the last. We need solutions that don’t start after untold damage has been done."

💪 Demands and Responsibilities — "If you demand rights for yourself, you have to demand those same rights for others. You have to take on the responsibility of collective action, and you yourself act in a way that benefits the collective. If you want credit, you have to give credit. If you want community, you have to be communal. If you want to be satiated, you have to allow others to be sated. If you want your vote to be respected, you have to respect the votes of others."

🗯️ Parler Pitched Itself as Twitter Without Rules. Not Anymore, Apple and Google Said. — "Google said in a statement that it had pulled the app because Parler was not enforcing its own moderation policies, despite a recent reminder from Google, and because of continued posts on the app that sought to incite violence."

🙅 Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act — "While this may all feel kind of mean, it's not meant to be. Unless you're one of the people who is purposefully saying wrong things about Section 230, like Senator Ted Cruz or Rep. Nancy Pelosi (being wrong about 230 is bipartisan). For them, it's meant to be mean. For you, let's just assume you made an honest mistake -- perhaps because deliberately wrong people like Ted Cruz and Nancy Pelosi steered you wrong. So let's correct that."

🧐 What Wikipedia saw during election week in the U.S., and what we’re doing next — "To help meet this goal, we hope to invest in resources that we can share with international Wikipedia communities that will help mitigate future disinformation risks on the sites. We’re also looking to bring together administrators from different language Wikipedias for a global forum on disinformation. Together, we aim to build more tools to support our volunteer editors, and to combat disinformation."


Quotation-as-title by the Goncourt Brothers. Image from top-linked post.

There are persons who, when they cease to shock us, cease to interest us

Donald Trump's head on Gladiator's body with text "How Trump sees himself - 'Are you not entertained?'"

It's difficult not to say "I told you so" when things play out exactly as predicted. Four years ago, when Donald Trump was sworn in as the 45th President of the USA, many had ominous forebodings.

Donald Trump’s inaugural address was a declaration of war on everything represented by these choreographed civilities. President Trump – it’s time to begin to get used to those jarringly ill-fitting words – did not conjure a deathless phrase for the day. His words will not lodge in the brain in any of the various uplifting ways that the likes of Lincoln, Roosevelt, Kennedy or Reagan once achieved. But the new president’s message could not have been clearer. He came to shatter the veneer of unity and continuity represented by the peaceful handover. And he may have succeeded. In 1933, Roosevelt challenged the world to overcome fear. In 2017, Mr Trump told the world to be very afraid.

The Guardian view on Donald Trump’s inauguration: a declaration of political war (January 2017)

He was all bluster, we were told. That it was rhetoric and would never be followed up with action.

Leaders are judged by their first 100 days in office. Wikipedia has a page outlining what Trump did during his, including things that, looking back from the vantage point of 2021, seem like warning shots: rolling back gun control legislation, stoking fears around voter fraud, cracking down on illegal immigration, freezing federal job hiring (except military), and engaging in tax reform to the benefit of the rich.


As a History teacher, it always struck me as odd that Adolf Hitler, a man born in Austria with brown hair, managed to lead a fascist party that extolled the virtues of being German and having blond hair. These days, I'm equally baffled that some of the richest people in our society — Donald Trump, Nigel Farage, Jacob Rees-Mogg — can pass themselves off as 'anti-elite'.

Much of their ability to do so is by creating an alternative reality with the aid of social networks like Facebook, Twitter, and YouTube. These replace traditional gatekeepers to information with algorithms tweaked for engagement, attention, and profit.

As we know, whipping up hatred and peddling conspiracy theories puts these algorithms into overdrive, ensuring that those who agree with the content see what's shared. But this approach also reaches those who don't agree with it, by virtue of people seeking to reject and push back on it. Meanwhile, of course, the platforms rake in $$$ from advertisers.


I get the feeling that there are a great number of people who do not understand the way the world works in 2021. I am probably one of them. In fact, given how much control we've given to algorithms in recent years, perhaps no-one truly understands.

One thing for sure, though, is that banning Donald Trump from Facebook and Instagram indefinitely is too little, too late. These platforms, along with others, downplayed his and other 'alt-right' hate speech for fear of being penalised.

Pandora's Box is open. Those who realise that everything is a construct and theory-laden will control those who don't. The latter will be reduced to merely wandering around an alternative reality, like protesters in Statuary Hall, waiting to be told what to do next.


Quotation-as-title by F.H. Bradley

One can acquire anything in solitude except character

https://www.youtube.com/watch?v=OT40Rmjwd-Q
How to Be at Home (2020)

🌐 The Metaverse is coming — "Over the course of 2021, the Metaverse will experience widespread use, and start to become a human co-experience utility. People will meet in virtual worlds not just to play a game, but also to check out a new movie trailer or laugh at user-generated videos. Education will move from learning to code online to learning core sciences with physics or biology simulations and ultimately becoming an immersive environment where classrooms are organised within it."

🐠 Hallucinogenic fish — "A few reporters have eaten the dream fish and described their strange effects. The most famous user is Joe Roberts, a photographer for the National Geographic magazine. He broiled the dream fish in 1960. After eating the delicacy, he experienced intense hallucinations with a science-fiction theme that included futuristic vehicles, images of space exploration, and monuments marking humanity's first trips into space."

Hundreds of Google Employees Unionize, Culminating Years of Activism — "The union’s creation is highly unusual for the tech industry, which has long resisted efforts to organize its largely white-collar work force. It follows increasing demands by employees at Google for policy overhauls on pay, harassment and ethics, and is likely to escalate tensions with top leadership."

🍌 The Banana Trick and Other Acts of Self-Checkout Thievery — "Perhaps it’s not surprising that some people steal from machines more readily than from human cashiers. “Anyone who pays for more than half of their stuff in self checkout is a total moron,” reads one of the more militant comments in a Reddit discussion on the subject."


Quotation-as-title by Stendhal.

Seeing through is rarely seeing into

On New Year's Eve, FarmVille shut down. Unlike everyone else who seemed to play the game a decade ago, I never experienced it. Why? Mercifully, I wasn't on Facebook.

An article in The New York Times argues that FarmVille, and other similar games made by Zynga, paved the way for the kind of 'social' experiences we have seen in the last decade. That is to say, mass behaviour modification disguised as a game.

Mia Consalvo, a professor in game studies and design at Concordia University in Canada, was among those who saw FarmVille constantly in front of her.

“When you log into Facebook, it’s like, ‘Oh, 12 of my friends need help,’” she said.

She questioned how social the game actually was, arguing that it didn’t create deep or sustained interactions.

“The game itself isn’t promoting a conversation between you and your friends, or encouraging you to spend time together within the game space,” she said. “It’s really just a mechanic of clicking a button.”

FarmVille Once Took Over Facebook. Now Everything Is FarmVille. (The New York Times)

It's hardly surprising, then, that conspiracy theories have now become Alternate Reality Games (ARGs) or Live Action Role Playing Games (LARPs) where claims can never be falsified.

You may have heard of QAnon, the batshit-crazy conspiracy theory. As one game designer points out, it's so effective, despite it being anti-rational, because of the incredible amounts of apophenia ("tendency to perceive meaningful connections between unrelated things") it entails.

QAnon has often been compared to ARGs and LARPs and rightly so. It uses many of the same gaming mechanisms and rewards. It has a game-like feel to it that is evident to anyone who has ever played an ARG, online role-play (RP) or LARP before. The similarities are so striking that it has often been referred to as a LARP or ARG. However this beast is very very different from a game.

[...]

QAnon grows on the wild misinterpretation of random data, presented in a suggestive fashion in a milieu designed to help the users come to the intended misunderstanding. Maybe “guided apophenia” is a better phrase. Guided because the puppet masters are directly involved in hinting about the desired conclusions. They have pre-seeded the conclusions. They are constantly getting the player lost by pointing out unrelated random events and creating a meaning for them that fits the propaganda message Q is delivering.

A Game Designer’s Analysis Of QAnon (Curiouser Institute)

Ironically enough, the arc of my career, and many other knowledge workers like me, is to spot connections between similarly unrelated things.

As Dorian Taylor points out in his newsletter, there is a lot of money to be made as the 'trusted intermediary' between people and the information they desire.

The role of the intermediary is, nominally, to act as a trusted source, conduit, or steward of shared informational state. Being the trusted steward of shared informational state is functionally the same as owning it. Platform operators understand this in their bones, which is why they make their fiefdoms easy to join and hard to quit. And they do that by making the information you put into them hard to pry back out.

Setting the Tone for an Anti-Platform
(the making of Making Sense)

Taylor is talking mainly about platforms and standards, but the point remains that intermediaries only remain trusted so long as what they say is either objectively true (i.e. is 'falsifiable') or they can keep spinning the lies long enough.

In early 2021, we live in a world of what has become known as 'fake news' or 'alternative facts'. As Caleb James DeLisle recently pointed out in an epic New Year's Eve thread, however, there's another way of understanding this: as a move away from what he calls 'consensus reality'.

There are obviously facts which are beyond question: no matter how much you believe, jumping from a tall building will not make you fly. But social constructions accepted as truth are far more pervasive than most people think.

2020 is finally coming to a close, and like many people you probably cannot wait for this cursed year to be over. But did you stop to think that January 1st is only the boundary between years because Julius Caesar decreed it so? Social constructs are pervasive.

Caleb James DeLisle (Mastodon)

People having different ways of understanding the world is the default way that tribes of humans work. The scientific method, an agreement on objective facts, is a relatively new invention.

Since 2005, the hugely lucrative game that Big Tech has got us to play is adtech: behavioural modification through invasive advertising that tracks your every move. Now, though, we're all at it, trying to modify one another's behaviour to get the external world to adhere to the internal one we've created.


Quotation-as-title from Elizabeth Bibesco. Image by Mari Helin.

Better to write for yourself and have no public, than to write for the public and have no self

📚 Bookshelf designs as unique as you are: Part 2 — "Stuffing all your favorite novels into a single space without damaging any of them, and making sure the whole affair looks presentable as well? Now, that’s a tough task. So, we’ve rounded up some super cool, functional and not to mention aesthetically pleasing bookshelf designs for you to store your paperback companions in!"

📱 How to overcome Phone Addiction [Solutions + Research] — "Phone addiction goes hand in hand with anxiety and that anxiety often lowers the motivation to engage with people in real life. This is a huge problem because re-connecting with people in the offline world is a solution that improves the quality of life. The unnecessary drop in motivation because of addiction makes it that much harder to maintain social health."

⚙️ From Tech Critique to Ways of Living — "This technological enframing of human life, says Heidegger, first “endanger[s] man in his relationship to himself and to everything that is” and then, beyond that, “banishes” us from our home. And that is a great, great peril."

🎨 Finding time for creativity will give you respite from worries — "According to one study examining the links between art and health, a cost-benefit analysis showed a 37% drop in GP consultation rates and a 27% reduction in hospital admissions when patients were involved in creative pursuits. Other studies have found similar results. For example, when people were asked to write about a trauma for 15 minutes a day, it resulted in fewer subsequent visits to the doctor, compared to a control group."

🧑‍🤝‍🧑 For psychologists, the pandemic has shown people’s capacity for cooperation — "In short, what we have seen is a psychology of collective resilience supplanting a psychology of individual frailty. Such a shift has profound implications for the relationship between the citizen and the state. For the role of the state becomes less a matter of substituting for the deficiencies of the individual and more to do with scaffolding and supporting communal self-organisation."


Quotation-as-title by Cyril Connolly. Image from top-linked post.

You can never get rid of what is part of you, even when you throw it away

🤖 Why the Dancing Robots Are a Really, Really Big Problem — "No, robots don’t dance: they carry out the very precise movements that their — exceedingly clever — programmers design to move in a way that humans will perceive as dancing. It is a simulacrum, a trompe l’oeil, a conjurer’s trick. And it works not because of something inherent in the machinery, but because of something inherent in ours: our ever-present capacity for finding the familiar. It looks like human dancing, except it’s an utterly meaningless act, stripped of any social, cultural, historical, or religious context, and carried out as a humblebrag show of technological might."

💭 Why Do We Dream? A New Theory on How It Protects Our Brains — "We suggest that the brain preserves the territory of the visual cortex by keeping it active at night. In our “defensive activation theory,” dream sleep exists to keep neurons in the visual cortex active, thereby combating a takeover by the neighboring senses."

A simple 2 x 2 for choices — "It might be simple, but it’s not always easy. Success doesn’t always mean money, it just means that you got what you were hoping for. And while every project fits into one of the four quadrants, there’s no right answer for any given person or any given moment."

📅 Four-day week means 'I don't waste holidays on chores' — "The four-day working week with no reduction in pay is good for the economy, good for workers and good for the environment. It's an idea whose time has come."

💡 100 Tips For A Better Life — "It is cheap for people to talk about their values, goals, rules, and lifestyle. When people’s actions contradict their talk, pay attention!"


Quotation-as-title from Goethe. Image from top-linked post.

You should aim to be independent of any one vote, of any one fashion, of any one century

Happy New Year!

Vintage photograph of an old man building a model ship with a young boy

⚒️ That which is unique, breaks — "The more finished goods become commodities, the fewer opportunities an individual has to generate new creation. The ability to mass-produce removes the opportunity for the great many to learn to produce at all. From such a thought, a future full of consumption-only hobbies might come as no surprise."

🚔 New Orleans City Council bans facial recognition, predictive policing and other surveillance tech — "The ordinance as passed puts outright bans on four pieces of technology — facial recognition, characteristic recognition and tracking software, predictive policing and cell-site simulators. A ban on license plate readers in the original ordinance was ultimately scrapped."

🎭 The ‘Batman Effect’: How having an alter ego empowers you — "Self-distancing seems to enable people to reap these positive effects by leading them to focus on the bigger picture – it’s possible to see events as part of a broader plan rather than getting bogged down in immediate feelings. And this has led some researchers to wonder whether it could also improve elements of self-control like determination, by making sure that we keep focused on our goals even in the face of distraction."

🦇 New lessons for stealth technology — "Optical metamaterials that refract and scatter light in adaptive ways are already familiar in the living world, for example in the photonic crystals found on strongly coloured, microstructured insect cuticles or butterfly wings. Now it appears that acoustic stealth technology too was discovered first by natural selection. Neil et al. report evidence that the intricate array of scales on some moth wings acts as an acoustic metamaterial to reduce echoes from ultrasound. This, they say, is probably an adaptive property that reduces the visibility of moths to the sonar searches of their predators, bats."

🥱 Misinformation fatigue sets in — "It turns out maybe people don’t actually care about being lied to. And little is likely to change in 2021 unless and until platforms take actual responsibility for the way people gather and organize on them — admitting that their algorithms already guide what we see, who we speak to, what we buy, and what we believe, and working with outside experts to instead curate an experience that undoes a bit of the pollution that they’ve made."


Quotation-as-title from Baltasar Gracián. Image from top-linked post.

See you in 2021!

Thought Shrapnel is now on its usual December hiatus, so see you next year for more links and thoughts on the intersection of technology and society!

Doug

A world without apps?

Steve Jobs standing next to a huge screen that shows the original iPhone. The words on the screen read "Your life in your pocket. The ultimate digital device."

When Steve Jobs demonstrated the iPhone in 2007, he didn't show off the App Store. That's because it didn't exist.

The full Safari engine is inside of iPhone. And so, you can write amazing Web 2.0 and Ajax apps that look exactly and behave exactly like apps on the iPhone. And these apps can integrate perfectly with iPhone services. They can make a call, they can send an email, they can look up a location on Google Maps.

Steve Jobs

Jobs' vision was for a world where web apps worked as well as native apps. Unfortunately, at the time, web technologies weren't quite ready for his vision, so, almost as a temporary workaround, Apple invented a billion-dollar industry.

Writing in The New York Times, Shira Ovide reflects on the recent controversy around Epic Games and Apple, among other things, and wonders whether we actually need apps.

Apple and Google dictate much of what is allowed on the world’s phones. There are good outcomes from this, including those companies weeding out bad or dangerous apps and giving us one place to find them.

But this comes with unhappy side effects. Apple and Google charge a significant fee on many in-app purchases, and they’ve forced app makers into awkward workarounds. (Ever try to buy a Kindle e-book on an iPhone app? You can’t.) The growing complaints from app makers show that the downsides of app control may be starting to outweigh the benefits.

You know what’s free from Apple and Google’s iron grip? The web. Smartphones could lean on the web instead.

Shira Ovide, Imagine a World Without Apps (The New York Times)

It's almost impossible for a small developer to get discovered in the Apple and Google app stores these days. As VentureBeat put it three years ago, "you have a better chance of making the NBA than making your app viral."

Progressive Web Apps, or PWAs, make an alternative, web-centric world a reality. When Google launched its gaming service, Stadia, on iOS, it used a PWA to bypass the Apple App Store.

Screenshots showing Pinterest PWA being installed on a smartphone.
Image via Tigren

Organisations from Twitter and Tinder to the Financial Times have PWAs. Pinterest used one to increase the number of people installing its app by 45%.
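For a sense of how lightweight this is: a PWA's 'installability' mostly comes down to serving a web app manifest (plus a service worker for offline support). A minimal sketch of such a manifest might look like the following — all names, paths, and colours here are illustrative, not taken from Pinterest or any other app mentioned:

```json
{
  "name": "Example Notes App",
  "short_name": "Notes",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#1a1a2e",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```

Referenced from a page with a `<link rel="manifest">` tag, this is enough for most browsers to offer an 'Add to Home Screen' or install prompt — no app store, and no cut of any sale, involved.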

This is about imagining an alternate reality where companies don’t need to devote money to creating apps that are tailored to iPhones and Android phones, can’t work on any other devices and obligate app makers to hand over a cut of each sale.

Maybe more smaller digital companies could thrive. Maybe our digital services would be cheaper and better. Maybe we’d have more than two dominant smartphone systems. Or maybe it would be terrible. We don’t know because we’ve mostly lived with unquestioned smartphone app dominance.

Shira Ovide, Imagine a World Without Apps (The New York Times)

Initiatives such as Mozilla's Firefox OS were cursed with being too early to market. Had they kept going, or if they were launching now, I think we'd see very different adoption rates.

As it is, and as Todd Weaver, CEO of Purism points out, it's going to require a combination of both market dynamics and regulation to fix the current situation. Let's get back to that original vision of the web as the platform for human flourishing.

He that overvalues himself will undervalue others, and he that undervalues others will oppress them

🎺 What Time Feels Like When You’re Improvising — "A great example of flow state is found in many improvised art forms, from music to acting to comedy to poetry, also known as “spontaneous creativity.” Improvisation is a highly complex form of creative behavior that justly inspires our awe and admiration. The ability to improvise requires cognitive flexibility, divergent thinking and discipline-specific skills, and it improves with training."

💼 SEC proposes rules for giving gig workers equity — "The five-year pilot program would allow gig companies to issue equity as long as it's no more than 15% of a worker's compensation during a 12-month period, and no more than $75,000 in value during a 36-month period (based on the share price when it's issued)."

🧠 Your Brain Is Not for Thinking — "Your brain’s most important job isn’t thinking; it’s running the systems of your body to keep you alive and well. According to recent findings in neuroscience, even when your brain does produce conscious thoughts and feelings, they are more in service to the needs of managing your body than you realize."

Social Unrest Is the Inevitable Legacy of the Covid Pandemic — "Like turpentine on flames, Covid-19 has rekindled older divisions, resentments and inequities across the world. In the U.S., Black Americans suffer disproportionately from police brutality, but also from the coronavirus — now these traumas merge. And everywhere, the poor fare worse than the rich."

👣 A new love for medieval-style travel — "We might today think of pilgrimage as a specifically religious form of travel. But even in the past, the sightseeing was as important as the spirituality. Dr Marion Turner, a scholar at Oxford University who studies Geoffrey Chaucer, points out that “it was a time away from ordinary society, and allowed for a time of play.”"


Quotation-as-title by Dr Johnson. Image via xkcd.

What kind of world do we want? (or, why regulation matters)

I saw a thread on Mastodon recently, which included this image:

Three images with the title 'Space required to Transport 48 People'. Each image is the same, with cars backed up down a road. The caption for each image is 'Car', 'Electric Car' and 'Autonomous Car', respectively.

Someone else replied with a meme showing a series of images with the phrase "They feed us poison / so we buy their 'cures' / while they ban our medicine". The poison in this case being cars burning fossil fuels, the cures being electric and/or autonomous cars, and the medicine public transport.

There's a similar kind of thinking in the world of tech, with at least one interviewee in the documentary The Social Dilemma saying that people should be paid for their data. I've always been uneasy about this, so it's good to see the EFF come out strongly against it:

Let’s be clear: getting paid for your data—probably no more than a handful of dollars at most—isn’t going to fix what’s wrong with privacy today. Yes, a data dividend may sound at first blush like a way to get some extra money and stick it to tech companies. But that line of thinking is misguided, and falls apart quickly when applied to the reality of privacy today. In truth, the data dividend scheme hurts consumers, benefits companies, and frames privacy as a commodity rather than a right.

EFF strongly opposes data dividends and policies that lay the groundwork for people to think of the monetary value of their data rather than view it as a fundamental right. You wouldn’t place a price tag on your freedom to speak. We shouldn’t place one on our privacy, either.

Hayley Tsukayama, Why Getting Paid for Your Data Is a Bad Deal (EFF)

As the EFF points out, who would get to set the price of that data, anyway? Also, individual data is useful to companies, but so is data in aggregate. Is that covered by such plans?

Facebook makes around $7 per user, per quarter. Even if they gave you all of that, is that a fair exchange?

Those small checks in exchange for intimate details about you are not a fairer trade than we have now. The companies would still have nearly unlimited power to do what they want with your data. That would be a bargain for the companies, who could then wipe their hands of concerns about privacy. But it would leave users in the lurch.

All that adds up to a stark conclusion: if where we’ve been is any indication of where we’re going, there won’t be much benefit from a data dividend. What we really need is stronger privacy laws to protect how businesses process our data—which we can, and should do, as a separate and more protective measure.

Hayley Tsukayama, Why Getting Paid for Your Data Is a Bad Deal (EFF)

As the rest of the article goes on to explain, we're already in a world of 'pay for privacy' which is exacerbating the gulf between the haves and the have-nots. We need regulation and legislation to curb this before it gallops away from us.

A candour affected is a dagger concealed

Slowly-boiling frogs in Facebook's surveillance panopticon

I can't think of a worse company than Facebook to be creating an IRL surveillance panopticon. But, I have to say, it's entirely on-brand.

On Wednesday, the company announced a plan to map the entire world, beyond street view. The company is launching a set of glasses that contains cameras, microphones, and other sensors to build a constantly updating map of the world in an effort called Project Aria. That map will include the inside of buildings and homes and all the objects inside of them. It’s Google Street View, but for your entire life.

Dave Gershgorn, Facebook’s Project Aria Is Google Maps — For Your Entire Life (OneZero)

We're like slowly-boiling frogs with this stuff. Everything seems fine. Until it's not.

The company insists any faces and license plates captured by Aria glasses wearers will be anonymized. But that won’t protect the data from Facebook itself. Ostensibly, Facebook will possess a live map of your home, pictures of your loved ones, pictures of any sensitive documents or communications you might be looking at with the glasses on, passwords — literally your entire life. The employees and contractors who have agreed to wear the research glasses are already trusting the company with this data.

Dave Gershgorn, Facebook’s Project Aria Is Google Maps — For Your Entire Life (OneZero)

With Amazon cosying up to police departments in the US with its Ring cameras, we really are hurtling towards surveillance states in the West.

Who has access to see the data from this live 3D map, and what, precisely, constitutes private versus public data? And who makes that determination? Faces might be blurred, but people can be easily identified without their faces. What happens if law enforcement wants to subpoena a day’s worth of Facebook’s LiveMap? Might Facebook ever build a feature to try to, say, automatically detect domestic violence, and if so, what would it do if it detected it?

Dave Gershgorn, Facebook’s Project Aria Is Google Maps — For Your Entire Life (OneZero)

Judges already requisition Fitbit data to solve crimes. Whatever Facebook's stated intentions around Project Aria, this data will end up in the hands of law enforcement, too.



To pursue the unattainable is insanity, yet the thoughtless can never refrain from doing so

'Prepper' philosophy

This morning, I came across a long web page from 2016, presumably created as a reaction to everything that went down that year (little did we know!).

Ostensibly, it's about preparing for scenarios in life that are relatively likely. It's pretty epic. While I've converted it to PDF and printed all 68 pages out to read in more detail, there were some parts that jumped out at me, which I'll share here.

[T]he purpose of this guide is to combat the mindset of learned helplessness by promoting simple, level-headed, personal preparedness techniques that are easy to implement, don't cost much, and will probably help you cope with whatever life throws your way.

lcamtuf, Doomsday Prepping For Less Crazy Folk

Growing up, my mother was the kind of woman who always had extra tins in the cupboards 'just in case'. Recently, my wife has taken this to the next level, with documents containing details on our stash including best before dates, etc.

Effective preparedness can be simple, but it has to be rooted in an honest and systematic review of the risks you are likely to face. Plenty of excited newcomers begin by shopping for ballistic vests and night vision goggles; they would be better served by grabbing a fire extinguisher, some bottled water, and then putting the rest of their money in a rainy-day fund.

LCAMTUF, DOOMSDAY PREPPING FOR LESS CRAZY FOLK

I see this document, which goes into money, self-defence, hygiene, and even relationships with neighbours, as more of a philosophy of life.

Rational prepping is meant to give you confidence to go about your business, knowing that you are well-equipped to weather out adversities. But it should not be about convincing yourself that the collapse is just around the corner, and letting that thought consume and disrupt your life.

Stay positive: the world is probably not ending, and there is a good chance that it will be an even better place for our children than it is for us. But the universe is a harsh mistress, and there is only so much faith we should be putting in good fortune, in benevolent governments, or in the wonders of modern technology. So, always have a backup plan.

LCAMTUF, DOOMSDAY PREPPING FOR LESS CRAZY FOLK

Recommended reading 👍

(also check out the author's hyperinflation gallery)

Much will have more

Philosophical anxiety as a superpower

Anxiety is a funny thing. Some people are anxious about specific things, while others, like me, have a kind of general background anxiety. It's only recently that I've admitted that to myself.

Some might call this existential or philosophical anxiety and, to a greater or lesser extent, it's part of the human condition.

Humans are philosophising animals precisely because we are the anxious animal: not a creature of the present, but regretful about the past and fearful of the future. We philosophise to understand our past, to make our future more comprehensible... Philosophy is the path that we hope gets us there. Anxiety is our dogged, unpleasant and indispensable companion.

Samir Chopra, Anxiety isn’t a pathology. It drives us to push back the unknown (Psyche)

One of the things my therapist has been pushing me on recently is my tolerance for, and ability to sit with, uncertainty. We all want to know things for certain, but that's rarely possible.

We are anxious; we seek relief by enquiring, by asking questions, while not knowing the answers; greater or lesser anxieties might heave into view as a result. As we realise the dimensions of our ultimate concerns, we find our anxiety is irreducible, for our increasing bounties of knowledge – scientific, technical or conceptual – merely bring us greater burdens of uncertainty.

Samir Chopra, Anxiety isn’t a pathology. It drives us to push back the unknown (Psyche)

To be able to tolerate the philosophical anxiety of not knowing, then, is a form of superpower. It may not necessarily make us happy, but it does make us free.

Anxiety then, rather than being a pathology, is an essential human disposition that leads us to enquire into the great, unsolvable mysteries that confront us; to philosophise is to acknowledge a crucial and animating anxiety that drives enquiry onward. The philosophical temperament is a curious and melancholic one, aware of the incompleteness of human knowledge, and the incapacities that constrain our actions and resultant happiness.

Samir Chopra, Anxiety isn’t a pathology. It drives us to push back the unknown (Psyche)

Ultimately, it's OK to be anxious, as it makes us human and takes us beyond mere rationality to a deeper, more powerful understanding of who (and why) we are.

The most fundamental enquiry of all is into our selves; anxiety is the key to this sacred inner chamber, revealing which existential problematic – the ultimate concerns of death, meaning, isolation, freedom – we are most eager to resolve.

Samir Chopra, Anxiety isn’t a pathology. It drives us to push back the unknown (Psyche)

You can’t tech your way out of problems the tech didn’t create

The Electronic Frontier Foundation (EFF) is a US-based non-profit that exists to defend civil liberties in the digital world. They've been around for 30 years, and I support them financially on a monthly basis.

In this article by Corynne McSherry, EFF's Legal Director, she outlines the futility of attempts by 'Big Social' to do content moderation at scale:

[C]ontent moderation is a fundamentally broken system. It is inconsistent and confusing, and as layer upon layer of policy is added to a system that employs both human moderators and automated technologies, it is increasingly error-prone. Even well-meaning efforts to control misinformation inevitably end up silencing a range of dissenting voices and hindering the ability to challenge ingrained systems of oppression.

CORYNNE MCSHERRY, CONTENT MODERATION AND THE U.S. ELECTION: WHAT TO ASK, WHAT TO DEMAND (EFF)

Ultimately, these monolithic social networks have a problem around false positives. It's in their interests to be over-zealous, as they're increasingly under the watchful eye of regulators and governments.

We have been watching closely as Facebook, YouTube, and Twitter, while disclaiming any interest in being “the arbiters of truth,” have all adjusted their policies over the past several months to try arbitrate lies—or at least flag them. And we’re worried, especially when we look abroad. Already this year, an attempt by Facebook to counter election misinformation targeting Tunisia, Togo, Côte d’Ivoire, and seven other African countries resulted in the accidental removal of accounts belonging to dozens of Tunisian journalists and activists, some of whom had used the platform during the country’s 2011 revolution. While some of those users’ accounts were restored, others—mostly belonging to artists—were not.

Corynne McSherry, Content Moderation and the U.S. Election: What to Ask, What to Demand (EFF)

McSherry's analysis is spot-on: it's the algorithms that are a problem here. Social networks employ these algorithms because of their size and structure, and because of the cost of human-based content moderation. After all, these are companies with shareholders.

Algorithms used by Facebook’s Newsfeed or Twitter’s timeline make decisions about which news items, ads, and user-generated content to promote and which to hide. That kind of curation can play an amplifying role for some types of incendiary content, despite the efforts of platforms like Facebook to tweak their algorithms to “disincentivize” or “downrank” it. Features designed to help people find content they’ll like can too easily funnel them into a rabbit hole of disinformation.

CORYNNE MCSHERRY, CONTENT MODERATION AND THE U.S. ELECTION: WHAT TO ASK, WHAT TO DEMAND (EFF)

She includes useful questions for social networks to answer about content moderation:

  • Is the approach narrowly tailored or a categorical ban?
  • Does it empower users?
  • Is it transparent?
  • Is the policy consistent with human rights principles?

But, ultimately...

You can’t tech your way out of problems the tech didn’t create. And even where content moderation has a role to play, history tells us to be wary. Content moderation at scale is impossible to do perfectly, and nearly impossible to do well, even under the most transparent, sensible, and fair conditions

CORYNNE MCSHERRY, CONTENT MODERATION AND THE U.S. ELECTION: WHAT TO ASK, WHAT TO DEMAND (EFF)

I'm so pleased that I don't use Facebook products, and that I only use Twitter these days as a place to publish links to my writing.

Instead, I'm much happier on the Fediverse, a place where if you don't like the content moderation approach of the instance you're on, you can take your digital knapsack and decide to call another place home. You can find me here (for now!).

Even those of a harsh and unyielding nature will endure gentle treatment: no creature is fierce and frightening if it is stroked

When people are free to do as they please, they usually imitate each other

Ethical living

Update: AI upscaled to larger resolution with more clarity

Mural which reads "You are personally responsible for becoming more ethical than the society you grew up in"

/via LinkedIn

Reafferent loops

In Peter Godfrey-Smith's book Other Minds, he cites work from 1950 by the German physiologists Erich von Holst and Horst Mittelstaedt.

They used the term afference to refer to everything you take in through the senses. Some of what comes in is due to the changes in the objects around you — that is exafference... — and some of what comes in is due to your own actions: that is reafference.

Peter Godfrey-Smith, Other Minds, p.154

Godfrey-Smith is talking about octopuses and other cephalopods, but I think what he's discussing is interesting from a digital note-taking point of view.

To write a note and read it is to create a reafferent loop. Rather than wanting to perceive only the things that are not due to you — finding the exafferent among the noise in the senses — you want what you read to be entirely due to your previous action. You want the contents of the note to be due to your acts rather than someone else's meddling, or the natural decay of the notepad. You want the loop between present action and future perception to be firm. This enables you to create a form of external memory — as was, almost certainly, the role of much early writing (which is full of records of goods and transactions), and perhaps also the role of some early pictures, though that is much less clear.

When a written message is directed at others, it's ordinary communication. When you write something for yourself to read, there's usually an essential role for time — the goal is memory, in a broad sense. But memory like this is a communicative phenomenon; it is communication between your present self and a future self. Diaries and notes-to-self are embedded in a sender/receiver system just like more standard forms of communication.

Peter Godfrey-Smith, Other Minds, p.154-155

Some people talk about digital note-taking as a form of 'second brain'. Given the type of distributed cognition that Godfrey-Smith highlights in Other Minds, it would appear that by creating reafferent loops that's exactly the kind of thing that's happening.
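The loop can be caricatured in a few lines of code. This is only an illustrative sketch (the `Notebook` class and its methods are my invention, not anything from the book): a note is a message whose sender is your present self and whose receiver is your future self, and what comes back on reading should be due entirely to your own earlier act.

```python
# A caricature of the reafferent loop: external memory as
# communication between a present self and a future self.

class Notebook:
    """Minimal external memory: write now, read back later."""

    def __init__(self):
        self._notes = {}

    def write(self, key, text):
        # The present self acts...
        self._notes[key] = text

    def read(self, key):
        # ...and the future self perceives. If the loop is firm,
        # what comes back is reafference: due only to our own act.
        return self._notes.get(key)


nb = Notebook()
nb.write("2020-10-18", "Check the Godfrey-Smith reference on p.154")
print(nb.read("2020-10-18"))
```

The point of the sketch is the asymmetry: unlike ordinary communication, both ends of the channel are you, separated only by time.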

Very interesting.

Hiring is broken, but not in the ways you assume

Hacker News is a link aggregator for people who work in tech. There's a lot of very technical information on there, but also stuff interesting to the curious mind more generally.

As so many people visit the site every day, it can be very influential, especially given the threaded discussion about shared links.

There can be a bit of a 'hive mind' sometimes, with certain things being sacred cows or implicit assumptions held by those who post (and lurk) there.

In this blog post focusing on hiring practices there's a critique of four 'myths' that seem to be prevalent in Hacker News discussions. Some of it is almost exclusively focused on tech roles in Silicon Valley, but I wanted to pull out this nugget which outlines what is really wrong with hiring:

Diversity. We really, really suck at diversity. We’re getting better, but we have a long way to go. Most of the industry chases the same candidates and assesses them in the same way.

Generally unfair practices. In cases where companies have power and candidates don’t, things can get really unfair. Lack of diversity is just one side-effect of this, others include poor candidate experiences, unfair compensation, and many others.

Short-termism. Recruiters and hiring managers that just want to fill a role at any cost, without thinking about whether there really is a fit or not. Many recruiters work on contingency, and most of them suck. The really good ones are awesome, but most of the well is poison. Hiring managers can be the same, too, when they’re under pressure to hire.

General ineptitude. Sometimes companies don't know what they're looking for, or are not internally aligned on it. Sometimes they just have broken processes, where they can't keep track of who they're talking to and what stage they're at. Sometimes the engineers doing the interviews couldn't care two shits about the interview or the company they work at. And often, companies are just tremendously indecisive, which makes them really slow to decide, or to just reject candidates because they can't make up their minds.

Ozzie, 4 Hiring Myths Common in HackerNews Discussions

I've hired people and, even with the latest talent management workflow software, it's not easy. It sucks up your time, and anything/everything you do can and will be criticised.

But that doesn't mean that we can't strive to make the whole process better, more equitable, and more enjoyable for all involved.

If you have been put in your place long enough, you begin to act like the place

Why we can't have nice things

There's a phrase, mostly used by Americans, in relation to something bad happening: "this is why we can't have nice things".

I'd suggest that the reason things go south is usually because people don't care enough to fix, maintain, or otherwise look after them. That goes for everything from your garden, to a giant wiki-based encyclopedia that is used as the go-to place to check facts online.

The challenge for Wikipedia in 2020 is to maintain its status as one of the last objective places on the internet, and emerge from the insanity of a pandemic and a polarizing election without being twisted into yet another tool for misinformation. Or, to put it bluntly, Wikipedia must not end up like the great, negligent social networks who barely resist as their platforms are put to nefarious uses.

Noam Cohen, Wikipedia's Plan to Resist Election Day Misinformation (WIRED)

Wikipedia's approach is based on an evolving process, one that is the opposite of "go fast and break things".

Moving slowly has been a Wikipedia super-power. By boringly adhering to rules of fairness and sourcing, and often slowly deliberating over knotty questions of accuracy and fairness, the resource has become less interesting to those bent on campaigns of misinformation with immediate payoffs.

Noam Cohen, Wikipedia's Plan to Resist Election Day Misinformation (WIRED)

I'm in danger of sounding old, and even worse, old-fashioned, but everything isn't about entertainment. Someone or something has to be the keeper of the flame.

Being a stickler for accuracy is a drag. It requires making enemies and pushing aside people or institutions who don’t act in good faith.

Noam Cohen, Wikipedia's Plan to Resist Election Day Misinformation (WIRED)

Collaboration is our default operating system

One of the reasons I'm not active on Twitter any more is the endless, pointless arguments between progressives and traditionalists, between those on the left of politics and those on the right, and between those who think that watching reality TV is an acceptable thing to spend your life doing, and those who don't.

Interestingly a new report which draws on data from 10,000 people, focus groups, and academic interviews suggests that half of the controversy on Twitter is generated by a small proportion of users:

[The report] states that 12% of voters accounted for 50% of all social-media and Twitter users – and are six times as active on social media as are other sections of the population. The two “tribes” most oriented towards politics, labelled “progressive activists” and “backbone Conservatives”, were least likely to agree with the need for compromise. However, two-thirds of respondents who identify with either the centre, centre-left or centre-right strongly prefer compromise over conflict, by a margin of three to one.

Michael Savage, ‘Culture wars’ are fought by tiny minority – UK study (The Observer)

Interestingly, the report also shows differences between the US and UK, as well as between attitudes before and after the pandemic started:

The research also suggested that the Covid-19 crisis had prompted an outburst of social solidarity. In February, 70% of voters agreed that “it’s everyone for themselves”, with 30% agreeing that “we look after each other”. By September, the proportion who opted for “we look after each other” had increased to 54%.

More than half (57%) reported an increased awareness of the living conditions of others, 77% feel that the pandemic has reminded us of our common humanity, and 62% feel they have the ability to change things around them – an increase of 15 points since February.

Michael Savage, ‘Culture wars’ are fought by tiny minority – UK study (The Observer)

As I keep on saying, those who believe in unfettered capitalism have to perpetuate a false narrative of competition in all things to justify their position. We have more things in common than differences, and I truly believe that collaboration is our default operating system.

Everything intercepts us from ourselves

Fighting health disinformation on Wikipedia

This is great to see:

As part of efforts to stop the spread of false information about the coronavirus pandemic, Wikipedia and the World Health Organization announced a collaboration on Thursday: The health agency will grant the online encyclopedia free use of its published information, graphics and videos.

Donald G. McNeil Jr., Wikipedia and W.H.O. Join to Combat Covid Misinformation (The New York Times)

Compared to Twitter's dismal efforts at fighting disinformation, the collaboration is welcome news.

The first W.H.O. items used under the agreement are its “Mythbusters” infographics, which debunk more than two dozen false notions about Covid-19. Future additions could include, for example, treatment guidelines for doctors, said Ryan Merkley, chief of staff at the Wikimedia Foundation, which produces Wikipedia.

Donald G. McNeil Jr., Wikipedia and W.H.O. Join to Combat Covid Misinformation (The New York Times)

More proof that the for-profit private sector is in no way more 'innovative' or effective than non-profits, NGOs, and government agencies.

Seeing through is rarely seeing into

Perceptions of the past

The History teacher in me likes this simple photo quiz site that shows how your perception of the past can easily be manipulated by how photographs are presented.

Gatekeepers of opportunity and the lottery of privilege

Despite starting out as a pejorative term, 'meritocracy' is something that, until recently, few people seem to have had a problem with. One of the best explanations of why meritocracy is a problematic idea is in this Mozilla article from a couple of years ago. Basically, it ascribes agency to those who were given opportunities due to pre-existing privilege.

In an interview with The Chronicle of Higher Education, Michael Sandel makes some very good points about the American university system, which can be more broadly applied to other western nations, such as the UK, which have elite universities.

The meritocratic hubris of elites is the conviction by those who land on top that their success is their own doing, that they have risen through a fair competition, that they therefore deserve the material benefits that the market showers upon their talents. Meritocratic hubris is the tendency of the successful to inhale too deeply of their success, to forget the luck and good fortune that helped them on their way. It goes along with the tendency to look down on those less fortunate, and less credentialed, than themselves. That gives rise to the sense of humiliation and resentment of those who are left out.

Michael Sandel, quoted in 'The Insufferable Hubris of the Well-Credentialed'

As someone who is reasonably well-credentialed, I nevertheless see a fundamental problem with requiring a degree as an 'entry-level' qualification. That's why I first got interested in Open Badges nearly a decade ago.

Despite the best efforts of the community, elite universities have a vested interest in maintaining the status quo. Eventually, the whole edifice will come crashing down, but right now, those universities are the gatekeepers to opportunity.

Society as a whole has made a four-year university degree a necessary condition for dignified work and a decent life. This is a mistake. Those of us in higher education can easily forget that most Americans do not have a four-year college degree. Nearly two-thirds do not.

[...]

We also need to reconsider the steep hierarchy of prestige that we have created between four-year colleges and universities, especially brand-name ones, and other institutions of learning. This hierarchy of prestige both reflects and exacerbates the tendency at the top to denigrate or depreciate the contributions to the economy made by people whose work does not depend on having a university diploma.

So the role that universities have been assigned, sitting astride the gateway of opportunity and success, is not good for those who have been left behind. But I’m not sure it’s good for elite universities themselves, either.

Michael Sandel, quoted in 'The Insufferable Hubris of the Well-Credentialed'

Thankfully, Sandel has a rather delicious solution to decouple privilege from admission to elite universities. It's not a panacea, but I like it as a first step.

What might we do about it? I make a proposal in the book that may get me in a lot of trouble in my neighborhood. Part of the problem is that having survived this high-pressured meritocratic gauntlet, it’s almost impossible for the students who win admission not to believe that they achieved their admission as a result of their own strenuous efforts. One can hardly blame them. So I think we should gently invite students to challenge this idea. I propose that colleges and universities that have far more applicants than they have places should consider what I call a “lottery of the qualified.” Over 40,000 students apply to Stanford and to Harvard for about 2,000 places. The admissions officers tell us that the majority are well-qualified. Among those, fill the first-year class through a lottery. My hunch is that the quality of discussion in our classes would in no way be impaired.

The main reason for doing this is to emphasize to students and their parents the role of luck in admission, and more broadly in success. It’s not introducing luck where it doesn’t already exist. To the contrary, there’s an enormous amount of luck in the present system. The lottery would highlight what is already the case.

Michael Sandel, quoted in 'The Insufferable Hubris of the Well-Credentialed'

Would people like me be worse off in a more egalitarian system? Probably. But that's kind of the point.
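The "lottery of the qualified" is simple enough to sketch as a procedure. This is only an illustrative toy, not anything from the interview: the function name, the random scores, and the 0.3 qualification bar are all made up, with the applicant and place counts borrowed from Sandel's Stanford/Harvard figures.

```python
import random

def lottery_of_the_qualified(applicants, is_qualified, places, seed=None):
    """Sandel's proposal: first screen for qualification,
    then fill the available places by pure chance."""
    rng = random.Random(seed)
    qualified = [a for a in applicants if is_qualified(a)]
    if len(qualified) <= places:
        return qualified
    return rng.sample(qualified, places)

# Roughly the numbers Sandel cites: ~40,000 applicants, ~2,000 places.
random.seed(0)
applicants = [{"id": i, "score": random.random()} for i in range(40_000)]
admitted = lottery_of_the_qualified(
    applicants,
    is_qualified=lambda a: a["score"] > 0.3,  # stand-in qualification bar
    places=2_000,
    seed=42,
)
print(len(admitted))
```

The design choice worth noticing is that merit only appears as a threshold test, never as a ranking: above the bar, every applicant's chance of admission is identical, which is exactly what makes the role of luck visible.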

Tedious sports

This made me smile:

You can divide most sports into those that take place in the real world (road cycling, sailing, cross country running) and those that are played on the artificial space of a court or pitch. Some (golf, croquet) occupy an uncertain middle ground, which may be one of the reasons they are so tedious to watch. Others (football, rugby) started as the former and, as they were codified, became the latter.

Jon Day, Better on TV (London Review of Books)

Man is equally incapable of seeing the nothingness from which he emerges and the infinity in which he is engulfed

Biometric surveillance in a post-pandemic future

I woke up today to the news that, in the UK, the police will get access to the data on people told to self-isolate on a 'case-by-case basis'. As someone pointed out on Mastodon, this was entirely predictable.

They pointed to this article by Yuval Noah Harari from March of this year, which also feels like a decade ago. In it, he talks about post-pandemic society being a surveillance nightmare:

You could, of course, make the case for biometric surveillance as a temporary measure taken during a state of emergency. It would go away once the emergency is over. But temporary measures have a nasty habit of outlasting emergencies, especially as there is always a new emergency lurking on the horizon. My home country of Israel, for example, declared a state of emergency during its 1948 War of Independence, which justified a range of temporary measures from press censorship and land confiscation to special regulations for making pudding (I kid you not). The War of Independence has long been won, but Israel never declared the emergency over, and has failed to abolish many of the “temporary” measures of 1948 (the emergency pudding decree was mercifully abolished in 2011). 

Yuval Noah Harari: the world after coronavirus (The Financial Times)

Remember the US 'war on terror'? That led to an incredible level of domestic and foreign surveillance that was revealed by Edward Snowden a few years ago.

The trouble, though, is that health is a clear and visible thing, a clear and present danger. Privacy is more nebulous, with harms often being in the future, so the trade-off is between the here and now and, well, the opposite.

Even when infections from coronavirus are down to zero, some data-hungry governments could argue they needed to keep the biometric surveillance systems in place because they fear a second wave of coronavirus, or because there is a new Ebola strain evolving in central Africa, or because . . . you get the idea. A big battle has been raging in recent years over our privacy. The coronavirus crisis could be the battle’s tipping point. For when people are given a choice between privacy and health, they will usually choose health.

Yuval Noah Harari: the world after coronavirus (The Financial Times)

For me, just like Harari, the way that governments choose to deal with the pandemic shows their true colours.

The coronavirus epidemic is thus a major test of citizenship. In the days ahead, each one of us should choose to trust scientific data and healthcare experts over unfounded conspiracy theories and self-serving politicians. If we fail to make the right choice, we might find ourselves signing away our most precious freedoms, thinking that this is the only way to safeguard our health.

Yuval Noah Harari: the world after coronavirus (The Financial Times)

Ethics is the result of the human will

Sabelo Mhlambi is a computer scientist, researcher and Fellow at Harvard’s Berkman Klein Center for Internet & Society. He focuses on the ethical implications of technology in the developing world, particularly in Sub-Saharan Africa, and has written a great, concise essay on technological ethics in relation to the global north and south.

Ethics is not missing in technology, rather we are witnessing the ethics in technology – the ethics of the powerful. The ethics of individualism.

Mhlambi makes a number of important points, and I want to share three of them. First, he says that ethics is the result of human will, not algorithmic processes:

Ethics should not be left to algorithmic definitions and processes, ultimately ethics is a result of the human will. Technology won’t save us. The abdication of social and environmental responsibility by creators of technology should not be allowed to become the norm.

Second, technology is a driver of change in society, and, because technology is not neutral, we have individualism baked into the tools we use:

Ethics describes one’s relationship and responsibilities to others and the environment. Ethics is the protocol for human interaction, with each other and with the world. Different ethical systems may be described through this scale: Individualistic systems promote one’s self assertion through the limitation of one’s relationship and responsibilities to others and the environment. In contrast, a more communal ethics asserts the self through the encouragement of one’s relationship and responsibilities to the community and the environment.

This is, he says, a form of colonialism:

Technology designed and deployed beyond its ethical borders poses a threat to social stability in different regions with different ethical systems, norms and values. The imposition of a society’s beliefs on another is colonial. This relationship can be observed even amongst members of the South as the more economically developed nations extend their technology and influence into less developed nations, the East to Africa relationship being an example.

Third, over and above the individualism and colonialism, the technologies we use are unrepresentative because they do not take into account the lived experiences and view of marginalised groups:

In the development and funding of technology, marginalized groups are underrepresented. Their values and views are unaccounted for. In the software industry marginalized groups make a minority of the labor force and leadership roles. The digital divide continues to increase when technology is only accessible through the languages of the well developed nations. 

It's an important essay, and one that I'll no doubt be returning to in the weeks and months to come.

Even while a thing is in the act of coming into existence, some part of it has already ceased to be

Forward momentum above all things

This page on a Brian Eno fan site was re-shared on Hacker News this week. It features text from an email from Eno himself, explaining why, although he's grateful that people want to discuss his work, he doesn't want to necessarily see it:

I think the reason I feel uncomfortable about such a thing is that it becomes a sort of weight on my shoulders. I start to feel an obligation to live up to something, instead of just following my nose wherever it wants to go at the moment. Of course success has many nice payoffs, but one of the disadvantages is that you start to be made to feel responsible for other people's feelings: what I'm always hearing are variations of "why don't you do more records like - (insert any album title)" or "why don't you do more work with - (insert any artist's name)?". I don't know why, these questions are unanswerable, why is it so bloody important to you, leave me alone... these are a few of my responses. But the most important reason is "If I'd followed your advice in the first place I'd never have got anywhere".

Eno goes on to explain that being constantly reminded of your 'exhaust', of what you've already done, isn't very conducive to future creative work:

I'm afraid to say that admirers can be a tremendous force for conservatism, for consolidation. Of course it's really wonderful to be acclaimed for things you've done - in fact it's the only serious reward, because it makes you think "it worked! I'm not isolated!" or something like that, and it makes you feel gratefully connected to your own culture. But on the other hand, there's a tremendously strong pressure to repeat yourself, to do more of that thing we all liked so much. I can't do that - I don't have the enthusiasm to push through projects that seem familiar to me ( - this isn't so much a question of artistic nobility or high ideals: I just get too bloody bored), but at the same time I do feel guilt for 'deserting my audience' by not doing the things they apparently wanted. I'd rather not feel this guilt, actually, so I avoid finding out about situations that could cause it.

Finally, Eno explains that, just like everyone else, there are days when he wonders where the creative spark comes from:

The problem is that people nearly always prefer what I was doing a few years earlier - this has always been true. The other problem is that so, often, do I! Discovering things is clumsy and sporadic, and the results don't at first compare well with the glossy and lauded works of the past. You have to keep reminding yourself that they went through that as well, otherwise they become frighteningly accomplished. That's another problem with being made to think about your own past - you forget its genesis and start to feel useless awe toward your earlier self: "How did I do it? Wherever did these ideas come from?". Now, the workaday everyday now, always looks relatively less glamorous than the rose-tinted then (except for those magic hours when your finger is right on the pulse, and those times only happen when you've abandoned the lifeline of your own history).

Being creative comes not from looking back, but looking forward. As the enigmatic Taylor, a character in the TV series Billions, states in one episode, we should prize "forward momentum above all things".

We all think we are exceptional, and are surprised to find ourselves criticised just like anyone else

Scenario planning, climate change, and the pandemic

Tim O'Reilly is a funny character. Massively talented and influential, but his political views (broadly right libertarian) seem to mean he misses things even when he's nevertheless heading in the right direction.

In a long article published recently, O'Reilly introduces his readers to scenario planning from a very US-centric point of view. It's also a position that, on first reading at least, is a bit techno-solutionist.

He starts by explaining that just because we date decades and centuries a particular way ("the 90s", "the twentieth century") it's actually cataclysmic events that define the start and end of eras:

So, when you read stories—and there are many—speculating or predicting when and how we will return to “normal”, discount them heavily. The future will not be like the past. The comfortable Victorian and Georgian world complete with grand country houses, a globe-spanning British empire, and lords and commoners each knowing their place, was swept away by the events that began in the summer of 1914 (and that with Britain on the “winning” side of both world wars.) So too, our comfortable “American century” of conspicuous consumer consumption, global tourism, and ever-increasing stock and home prices may be gone forever.

Tim O'Reilly, Welcome to the 21st Century: How To Plan For The Post-Covid Future

For me, the 21st century began on September 11th, 2001 with the twin towers attack. The aftermath of that, including the curtailing of our civil liberties in the west, has been a defining feature of the century so far.

O'Reilly, however, points to the financial crisis:

Our failure to make deep, systemic changes after the financial collapse of 2009, and our choice instead to spend the last decade cutting taxes and spending profusely to prop up financial markets while ignoring deep, underlying problems has only made responding to the current crisis that much more difficult. Our failure to build back creatively and productively from the global financial crisis is necessary context for the challenge to do so now.

Tim O'Reilly, Welcome to the 21st Century: How To Plan For The Post-Covid Future

All of these things compound one another, with financial uncertainty leading to political instability, and the election of populist leaders. That meant we were less prepared for the pandemic than we could have been, and when it hit, we've suffered (in the UK and US at least) from an incompetent response.

I recently finished reading A Distant Mirror: The Calamitous 14th Century by Barbara E. Tuchman, which discusses at length something that O'Reilly picks up on:

If you are a student of history, you know that the massive reduction of the workforce in post-Black Death Europe forced lords to give better terms of tenure—serfdom all but disappeared, and the rise of a mercantile middle class set the stage for the artistic and scientific progress of the Renaissance. Temporary, but catastrophic, events often usher in permanent economic changes. Sometimes the changes appear to be reversed but it just takes time for them to stick. World War II brought women into the workforce, and then victory ushered them back out. But the wine of opportunity, once tasted, was not left undrunk forever.

Tim O'Reilly, Welcome to the 21st Century: How To Plan For The Post-Covid Future

I'm hoping, like O'Reilly, that there are silver linings that come out of the pandemic related to climate change. Unlike him, I don't think the answer is more consumption. As an article I shared recently points out, not only can we not have billionaires and solve climate change, but the whole edifice of over-consumption needs to collapse under its own weight.

At times, our strengths propel us so far forward we can no longer endure our weaknesses and perish from them

Reducing exam stress by removing pointless exams

In the UK, it used to be the case that children could leave school at 16. This was the reason for 'O' levels (which my parents took), and GCSEs, which I sat at that age.

However, these days, young people must remain in education or training until they are 18 years old. What, then, is the point of taking exams aged 16 and 18?

A group of Tory MPs has written a report, with one of the authors, Flick Drummond, making some good points:

The paper argues that preparation for GCSE exams means that pupils miss a large chunk of valuable learning because of the time taken up with mock exams and revision, followed by the exams themselves. “That’s almost six months out of a whole year spent preparing for exams,” said Drummond.


She said she was particularly concerned by the impact of exams on mental health, citing a report backed by the Children’s Society in August that ranked England 36th out of 45 countries in Europe and North America for wellbeing.


Instead, the new report says, the exams should be replaced by a baccalaureate, which would cover several years’ study and would allow children more time from the age of 15 to settle on the subjects they wanted to study in the sixth form for A-levels or vocational qualifications such as T-levels and apprenticeships, and to explore potential careers in a structured way.

Richard Adams, Tory MPs back ditching GCSE exams in English school system overhaul (The Guardian)

As a parent of children who could be affected by this, I actually think this should be trialled first in the private sector and then rolled out in the state sector. Too often, the private sector benefits from treating state school pupils as guinea pigs, and then cherry-picking what works.

The clever man often worries; the loyal person is often overworked

Like the flight of a sparrow through a lighted hall, from darkness into darkness

Face-to-face university classes during a pandemic? Why?

Earlier in my career, when I worked for Jisc, I was based at Northumbria University in Newcastle. It's just been announced that 770 students there have been infected with COVID-19.

As Lorna Finlayson, a philosophy lecturer at the University of Essex, points out, the desire to get students on campus for face-to-face teaching is driven by economics. Universities are businesses, and some of them are likely to fail this academic year.

[A]fter years of pushing to expand online learning and “lecture capture” on the basis that it is what students want, university managers have decided that what students really want now, during a global pandemic, is face-to-face contact. This sudden-onset fetish reached its most perverse extreme in the case of Boston University, which, realising that many teaching rooms lack good ventilation or even windows, decided to order “giant air circulators”, only to discover that the air circulators were very noisy. Apparently unable to source enough “mufflers” for the air circulators, the university ordered Bluetooth headsets to enable students and teachers to communicate over the roar of machinery.

All of which raises the question: why? The determination to bring students back to campus at any cost doesn’t stem from a dewy-eyed appreciation of in-person pedagogy, nor from concerns about the impact of isolation on students’ mental health. If university managers had any interest in such things, they would not have spent years cutting back on study skills support and counselling services.

Lorna Finlayson, How universities tricked students into returning to campus (The Guardian)

I know people who work in universities in various positions. What they tell me astounds me; a callous disregard for human life in the pursuit of either economic survival, or profit.

This is, as usual, all about the money. With student fees and rents now their main source of revenue, universities will do anything to recruit and retain. When the pandemic hit, university managers warned of a potentially catastrophic loss of income from international student fees in particular. Many used this as an excuse to cut jobs and freeze pay, even as vice-chancellors and senior management continued to rake in huge salaries. As it turned out, international student admissions reached a record high this year, with domestic undergraduate numbers also up – perhaps less due to the irresistibility of universities’ “offer” than to the lack of other options (needless to say, staff jobs and pay have yet to be reinstated).


But students are more than just fee-payers. They are rent-payers too. Rightly or wrongly, most of those in charge of universities have assumed that only the promise of face-to-face classes would tempt students back to their accommodation. That promise can be safely broken only once rental contracts are signed and income streams flowing.

Lorna Finlayson, How universities tricked students into returning to campus (The Guardian)

I predict legal action at some point in the near future.

'Rulesy' people

Some people in the world want to fit in. Others want to change it. Still others want to fit in by changing it. Robin Hanson has a theory about how paternalism appears in a culture, linking it to a pattern of behaviours that bestows a form of prestige on those creating and enforcing rules.

The key idea is that there are many “rulesy” people in the world who specialize in learning of and even creating rules, so that they can then find and reveal violations of these rules around them. This allows them to beat on their rivals, and also to raise their own status. It obviously raises their dominance via the power they wield, but they prefer to be instead seen as prestigious, enforcing rules whose purpose is more clearly altruistic. And what could be more altruistic than keeping people from hurting themselves?

So many people who are especially good at noticing and applying rules, good at finding potential violations, good at framing situations as rule violations, and willing to at least gossip about violators, are eager for a supply of apparently-paternalism-motived rules they can enforce. So they take suggestions by elites regarding what is good behavior and work to turn them into rules they can enforce. They push to turn norms into laws, and to make norms out of the weak behavior patterns of elites, or common sorts of praise and criticism.

Robin Hanson, Rulesy Folks Push Paternalism (Overcoming Bias)

I like Hanson's explanation of how this can work in practice:

For example, maybe at first some elites sometimes wear hats. Then they and others start to praise hat-wearers. Then more folks start to wear hats, and get proud of how they are good hat people. Good candidates for promotion to elite they are. Then hat fans start to insinuate that people who don’t wear hats are not the best sort of people in various ways, and are only hurting themselves. They say that word needs to get out about the advantages of hats. And those irresponsible people arguing against hats really need to be dealt with – everyone should be told that their arguments don’t meet the highest possible standards of scientific rigor. (Though neither do pro-hat arguments.)


It becomes a matter of pride to teach your children to wear hats. And to have hats taught in school. And to include the lack of hats in lists of problems that problem people have. Hat fans start to push the orgs of which they are part to promote hats, sometimes even requiring hats at org functions. Finally it is suggested that wouldn’t it be simpler and more efficient to just have the government require hats. Then foreigners who visit us won’t think we are such backward non-hat people. And its really for their own good, as we all know.

At every step along this path, people can gain by pushing for stricter and stronger hat norms and rules. They are good people, pushing a good thing, which just happens to let them dump harder on rivals. Which is plausibly why we tend to end up with just too many overly restrictive rules. Rules rise with the ratchet of crises that can be blamed on problems said to be fixed by adding new rules. But between the crises, we rarely take away or weaken our rules.

Robin Hanson, Rulesy Folks Push Paternalism (Overcoming Bias)

The importance of co-operation

Quoting Stephen Downes in the introduction to his post, Harold Jarche goes on to explain:

Managing in complex adaptive systems means influencing possibilities rather than striving for predictability (good or best practices). Cooperation in our work is needed so that we can continuously develop emergent practices demanded by this complexity. What worked yesterday won’t work today. No one has the definitive answer any more, but we can use the intelligence of our networks to make sense together and see how we can influence desired results. This is cooperation and this is the future, which is already here, albeit unevenly distributed.

Harold Jarche, revisiting cooperation

It's all very well having streamlined workflows, but that's the way to get automated out of a job.

One is not superior merely because one sees the world in an odious light

How to give advice

A great metaphor from a fantastic article:

Suppose you are holding a ball in your hand inside a moving train. From your frame of reference, the ball is static. But from somebody else’s perspective, one who looks at you from outside the train, it’s a completely different picture. They see what you cannot see. Advice helps us realise that the ball, along with you, is moving at the speed of the train.

Abhishek Chakraborty, Giving Advice is Not Giving Solutions

I'm definitely guilty of giving people solutions when they just need me to help them see things from a different angle 🤔

The truth is too simple: one must always get there by a complicated route

The crisis in professional sport is one of its own making

I couldn't agree more with this analysis from Barney Ronay, one of my favourite sports writers:

Professional sport is facing a crisis of unprecedented urgency. It must be prepared to face it largely alone.

At which point it is worth being clear on exactly what is at stake. This is a moment of peril that should raise questions far beyond simply survival or sustaining the status quo. Questions such as: what is sport actually for? And more to the point, what do we want it to look like when this is all over?

It helps to define the terms of all this jeopardy. There has been a lot of emotive rhetoric about sport being on the verge of extinction, its very existence in doubt, as though the basic ability to participate, support and spectate could be vaporised out from beneath us.

This is incorrect. What is being menaced is the current financial management of professional sport, its existing models and cultural practices, much of which is pretty joyless and dysfunctional in the first place.

Barney Ronay, Never waste a crisis: Covid-19 trauma can force sport to change for good (The Guardian)

Was sport less enjoyable before loads of money was thrown at it? As Ronay points out, Gareth Bale earning £600,000 per week "could keep every club in League Two in business by paying their combined wage bill out of his annual salary".

I'm not sure the current model is sustainable, so if the pandemic forces a rethink, I'm all for it.

If you don’t know what you’re doing, you can be very creative about it

The discourse of disruption

Adrian Daub, a professor of literature, takes issue with the tech sector's focus on disruption:

Most of the discourse around disruption clearly draws on the idea of creative destruction, but it shifts it in important respects. It doesn’t seem to suggest that ever-intensifying creative destruction will eventually lead to a new stability – that hyper-capitalism almost inevitably pushes us toward something beyond capitalism. Instead, disruption seems to suggest that the instability that comes with capitalism is all there is and can be – we might as well strap in for the ride. Ultimately, then, disruption is newness for people who are scared of genuine newness. Revolution for people who don’t stand to gain anything by revolution.

Indeed, there is an odd tension in the concept of disruption: it suggests a thorough disrespect towards whatever existed previously, but in truth it often seeks to simply rearrange whatever exists. Disruption is possessed of a deep fealty to whatever is already given. It seeks to make it more efficient, more exciting, more something, but it never ever wants to dispense altogether with what’s out there. This is why its gestures are always radical, but its effects never really upset the apple cart: Uber claims to have “revolutionised” the experience of hailing a cab, but really that experience has largely stayed the same. What it managed to get rid of were steady jobs, unions and anyone other than Uber making money on the whole enterprise.

Adrian Daub, The disruption con: why big tech’s favourite buzzword is nonsense (The Guardian)

Venture-capital-backed tech companies, generating profits through (what I call) 'software with shareholders', fracture our societies, destroy our communities, and enrich the privileged.

Let's talk

Wise words from Seth Godin:

Universities and local schools are in crisis with testing in disarray and distance learning ineffective…

[When can we talk about what school is for?]

It’s comfortable to ignore the system, to assume it is as permanent as the water surrounding your goldfish. But the fact that we have these tactical problems is all the evidence we need to see that something is causing them, and that spending time on the underlying structure could make a difference.

Seth Godin, When can we talk about our systems?

It's not just education, or racism, or healthcare, or any of the other things he lists. Organisations are made up of people, and most people don't like conflict.

As a result, we get a constant barrage of tactical responses to emergent situations, rather than focusing on strategies that would prevent them.

The more time we spend on purposeful reflection, the less time we spend putting out fires.

An ounce of good sense is worth a pound of subtlety

Entirely predictable

We've had some pretty bad governments in the UK during my lifetime, but has any been so underqualified, so inept, so corrupt and nepotistic as our current one? It would be bad enough in normal times, but during a pandemic it's a tragedy.

Who knew that children go to school in September? Who guessed that hundreds of thousands of students head to universities where they – and easily shocked readers should look away – strive with every fibre of their being to mingle with each other as vigorously as they can? What clairvoyant might have predicted that, when the government offered the public cut-price restaurant meals at the taxpayers’ expense, the public would gobble them up? Or that, when the prime minister urged workers to go back to their offices and save Pret a Manger, a few brave souls would have returned to their desks and risked having “dulce et decorum est pro Pretia mori” carved on their gravestones?

Nick Cohen, The meritocracy has had its day. How else to explain the rise of Dido Harding? (The Observer)

Nothing will ever be attempted, if all possible objections must be first overcome

Facebook Accused of Watching Instagram Users Through Cameras (The Verge)

In the complaint filed Thursday in federal court in San Francisco, New Jersey Instagram user Brittany Conditi contends the app’s use of the camera is intentional and done for the purpose of collecting “lucrative and valuable data on its users that it would not otherwise have access to.”


Facebook Has Been a Disaster for the World (The New York Times)

Facebook has been incredibly lucrative for its founder, Mark Zuckerberg, who ranks among the wealthiest men in the world. But it’s been a disaster for the world itself, a powerful vector for paranoia, propaganda and conspiracy-theorizing as well as authoritarian crackdowns and vicious attacks on the free press. Wherever it goes, chaos and destabilization follow.


Kim Kardashian West joins Facebook and Instagram boycott (BBC News)

“I can't sit by and stay silent while these platforms continue to allow the spreading of hate, propaganda and misinformation - created by groups to sow division and split America apart,” Kardashian West said.


Quotation-as-title from Dr Johnson.

Privilege and pandemic

To the left, a chessboard strewn with bloodied, dead chesspieces. To the right, a small table is set for dinner with wine: the king and queen pieces of both sides of the chessboard stand at the table together, ready to enjoy a meal. (via Cathal Garvey)

I found this via Mastodon and immediately had to post it here. I'm not sure about the original artist, but it struck me as capturing our current moment rather well.

The future of closed, proprietary technology is within your body

Referencing a recent article in The New York Times, and using a metaphor from his honeymoon in Cancun, Purism's Chief Security Officer raises some important questions about the closed/open future of technology:

Think about the future of computers over the next fifty years. Computers will become even more ubiquitous, not just embedded in all of the things around us, but embedded inside us. With advances in neural-computer interfaces, there is a high likelihood that we will be connecting computers directly to our brains within our lifetimes. Which tech company would you trust to control your neural implant?

If a computer can read and write directly to your brain, does it change how you feel about vendors controlling which software you can use or whether you can see the code? Does it change how you feel about vendors subsidizing hardware and software with ads or selling data they access through your computer? Does it change how you feel about government regulation of technology?

Kyle Rankin, Tourists on Tech's Toll Roads

Pandemic microaggressions

This article primarily focuses on racism and intolerance to gender differences, but even as a "white, male... heterosexual, cisgender, able-bodied, wealthy, and educated" man, I recognise some of what it describes.

The COVID-19 pandemic has opened much of our workforce to a new surge of microaggressions by making coworkers unwelcome guests in their homes through video meetings. Bosses and coworkers can see our families and furniture. They can hear the background noise from our neighborhoods. They see us with our hair, faces, and clothes less put together than usual due to the closure of the shops and salons that help us assimilate into the mainstream world.

Sarah Morgan, How microaggressions look different when we’re working remotely (Fast Company)

There's a line, I think, between friendly banter and curiosity and, for example, being reminded on a daily basis that I'm getting ever more grey, that I'm looking tired, and that my forehead is shinier than a billiard ball.

Microaggressions? Perhaps. But on days when I'm not feeling 100%, it sure does grind me down.

The most radical thing you can do is stay home

Consensus, legitimate controversy, and deviance

My go-to explanation of acceptable political opinions is usually the Overton Window, but this week I came across Hallin's spheres:

Hallin's spheres is a theory of media objectivity posited by journalism historian Daniel C. Hallin in his book The Uncensored War to explain the coverage of the Vietnam war. Hallin divides the world of political discourse into three concentric spheres: consensus, legitimate controversy, and deviance. In the sphere of consensus, journalists assume everyone agrees. The sphere of legitimate controversy includes the standard political debates, and journalists are expected to remain neutral. The sphere of deviance falls outside the bounds of legitimate debate, and journalists can ignore it. These boundaries shift, as public opinion shifts.

Wikipedia

I think the interesting thing right now for either theory is that most people have their news filtered by social networks. As a result, it's not (just) journalists doing the filtering, but people in affinity groups.

One nation under Zuck

This image, from Grayson Perry, is incredible. As he points out in the accompanying article, he's chosen the US due to an upcoming series of his, but geographically this could be anywhere, as culture wars these days happen mainly online.

I've added the emphasis in the quotation below:

When we experience a background hum of unfocused emotion, be it anxiety, sadness, fear, anger, we unconsciously look for something to attach it to. Social media is brilliant at supplying us with issues to which attach our free-floating feelings. We often look for nice, preformed boxes into which we can dump our inchoate feelings, we crave certainty. Social media constantly offers up neat solutions for our messy feelings, whether it be God, guns, Greta or gender identity.

In a battle-torn landscape governed by zeroes and ones, nuance, compromise and empathy are the first casualties. If I were to sum up the online culture war in one word it would be “diaphobia”, a term coined by the psychiatrist RD Laing meaning “fear of being influenced by other people”, the opposite of dialogue. Our ever-present underlying historical and enculturated emotions will nudge us to cherrypick and polish the nuggets of information that support a stance that may have been in our bodies from childhood. Once we have taken sides, the algorithms will supply us with a stream of content to entrench and confirm our beliefs.

Grayson Perry, Be it on God, guns or Greta, social media offers neat solutions for our messy feelings (The Guardian)

Things Come Apart

Exploded image of old rotary phone
/via Todd McLellan, Things Come Apart

More advice on perfectionism

A few years ago I read Anne Lamott's Bird by Bird, which is even better than people say. I was reminded of this quotation via Oliver Burkeman's Help! How to Become Slightly Happier and Get a Bit More Done.

Perfectionism is the voice of the oppressor, the enemy of the people. It will keep you cramped and insane your whole life... perfectionism is based on the obsessive belief that if you run carefully enough, hitting each stepping stone just right, you won't have to die. The truth is that you will die anyway and that a lot of people who aren't even looking at their feet are going to do a whole lot better than you.

Anne Lamott, Bird by Bird

To be happy, we must not be too concerned with others

'Recycling' plastic is an oil industry scam

This NPR article about the oil industry's cynical manipulation of us when it comes to recycling plastic blew my mind 🤯

Here's the basic problem: All used plastic can be turned into new things, but picking it up, sorting it out and melting it down is expensive. Plastic also degrades each time it is reused, meaning it can't be reused more than once or twice.

On the other hand, new plastic is cheap. It's made from oil and gas, and it's almost always less expensive and of better quality to just start fresh.

Laura Sullivan, How Big Oil Misled The Public Into Believing Plastic Would Be Recycled (NPR)

Now that China isn't accepting the world's plastic for 'recycling' (i.e. landfill), domestic initiatives have a problem.

The industry's awareness that recycling wouldn't keep plastic out of landfills and the environment dates to the program's earliest days, we found. "There is serious doubt that [recycling plastic] can ever be made viable on an economic basis," one industry insider wrote in a 1974 speech.


Yet the industry spent millions telling people to recycle, because, as one former top industry insider told NPR, selling recycling sold plastic, even if it wasn't true.

"If the public thinks that recycling is working, then they are not going to be as concerned about the environment," Larry Thomas, former president of the Society of the Plastics Industry, known today as the Plastics Industry Association and one of the industry's most powerful trade groups in Washington, D.C., told NPR.

Laura Sullivan, How Big Oil Misled The Public Into Believing Plastic Would Be Recycled (NPR)

The world really is monumentally screwed every which way at the moment. And I feel like an absolute chump for being in any way enthusiastic about at-home recycling.

Lifequakes

One way of thinking about the pandemic is as inevitable, and just one of a series of life-changing events that will happen to you during your time on earth.

Whereas some people seem to think that life should be trouble- and pain-free, it's clear from even a cursory glance at history that this is an impossible expectation.

This article is a useful one for reframing the pandemic as a change that we're literally all going through together, but which will affect us differently:

Transitions feel like an abnormal disruption to life, but in fact they are a predictable and integral part of it. While each change may be novel, major life transitions happen with clocklike regularity. Life is one long string of them, in fact. The author Bruce Feiler wrote a book called Life Is in the Transitions: Mastering Change at Any Age. After interviewing hundreds of people about their transitions, he found that a major change in life occurs, on average, every 12 to 18 months. Huge ones—what Feiler calls “lifequakes”—happen three to five times in each person’s life. Some lifequakes are voluntary and joyful, such as getting married or having a child. Others are involuntary and unwelcome, such as unemployment or life-threatening illness.

Arthur C. Brooks, The Clocklike Regularity of Major Life Changes (The Atlantic)

As scarce as truth is, the supply has always been in excess of demand

Inside your pain are the things you care about most deeply

I listened to this episode of The Art of Manliness podcast a while back on Acceptance and Commitment Therapy (ACT) and found it excellent. I've discussed ACT with my CBT therapist who says it can also be a useful approach.

My guest today says we need to free ourselves from these instincts and our default mental programming and learn to just sit with our thoughts, and even turn towards those which hurt the most. His name is Steven Hayes and he’s a professor of psychology, the founder of ACT — Acceptance and Commitment Therapy — and the author of over 40 books, including his latest 'A Liberated Mind: How to Pivot Toward What Matters'. Steven and I spend the first part of our conversation in a very interesting discussion as to why traditional interventions for depression and anxiety — drugs and talk therapy — aren’t very effective in helping people get their minds right, and how ACT takes a different approach to achieving mental health. We then discuss the six skills of psychological flexibility that undergird ACT and how these skills can be used not only by those dealing with depression and anxiety but by anyone who wants to get out of their own way and show up and move forward in every area of their lives.

Something that Hayes says is that "if people don't know what their values are, they take their goals, the concrete things they can achieve, to be their values". This, he says, is why rich people can still be unfulfilled.

Well worth a listen.

The world needs less philanthropy and more equality

I've been skeptical about the motives of philanthropic organisations for a while now. This article in The Guardian is a long read, but worth it.

Here's an excerpt:

The common assumption that philanthropy automatically results in a redistribution of money is wrong. A lot of elite philanthropy is about elite causes. Rather than making the world a better place, it largely reinforces the world as it is. Philanthropy very often favours the rich – and no one holds philanthropists to account for it.

The role of private philanthropy in international life has increased dramatically in the past two decades. Nearly three-quarters of the world’s 260,000 philanthropy foundations have been established in that time, and between them they control more than $1.5tn. The biggest givers are in the US, and the UK comes second. The scale of this giving is enormous. The Gates Foundation alone gave £5bn in 2018 – more than the foreign aid budget of the vast majority of countries.

Philanthropy is always an expression of power. Giving often depends on the personal whims of super-rich individuals. Sometimes these coincide with the priorities of society, but at other times they contradict or undermine them. Increasingly, questions have begun to be raised about the impact these mega-donations are having upon the priorities of society.

To be in process of change is not an evil, any more than to be the product of change is a good

Marcus Aurelius on troubles

I really needed to read the following quotation this morning:

Everything that happens is as normal and expected as the spring rose or the summer fruit; this is true of sickness, death, slander, intrigue, and all the other things that delight or trouble foolish men.

Marcus Aurelius

Thinking about the trials and tribulations a Roman emperor must have gone through puts my tiny problems into a bit of perspective.

Enforced idleness

Some people think it's the Protestant work ethic, others that it's a genetic predisposition. Me? I think it's to do with the highly competitive nature of western societies.

Whatever you think causes it, the inability of adults, including myself, to spend a day doing nothing is kind of problematic. It's something I often discuss with Laura Hilliger (and she refers to it regularly in her excellent newsletter).

There's a university in Hamburg, Germany, giving out 'idleness grants' for people to do absolutely nothing. Emma Beddington's answers to the questions on the application form aren't too different to how I'd answer:

What do you not want to do? I want not to compare my achievements, or lack of them, with others’. If successful, for the duration of my idleness grant I will crush the exhausting running mental commentary that points out what those with energy, drive and ambition are achieving and enumerates my inadequacies. When one or other of my nemeses tweets the dread phrase “some personal news” (always the precursor to an announcement of professional glory), I will not feel bad, because I will have accepted that “being quite lazy” has inherent merit in 2020.

Emma Beddington, Doing nothing is so easy for me. But how to feel good about it? (The Guardian)

It's always possible to do more and be more, but sometimes it's important to just spend time being who you already are.

What is above knows what is below, what is below does not know what is above

There is something very strange about walking up mountains only to come back down again. But I love it, as did the French surrealist poet, philosopher, and novelist René Daumal:

You cannot always stay on the summits. You have to come down again…

So what’s the point? Only this: what is above knows what is below, what is below does not know what is above. While climbing, take note of all the difficulties along your path. During the descent, you will no longer see them, but you will know that they are there if you have observed carefully.

René Daumal, via Brain Pickings

While you're in the midst of self-imposed adversity, you can also escape your self-imposed psychic prison.

The way to get things done is not to mind who gets the credit of doing them

Perfectionism is more toxic than you imagine

As someone who struggles with perfectionism on a daily basis, this was something I needed to read this morning:

Perfectionism is more toxic than you imagine. Watch yourself and notice how often you’re being a perfectionist without even realising it. And see how it chips away at your happiness.

Rebecca Toh, ten recent thoughts

The other thoughts in the list are also worth reflecting on, especially the one about writing being the medium of learning.

Rethinking human responses to adversity

As a parent and former teacher I can get behind this:

ADHD is not a disorder, the study authors argue. Rather it is an evolutionary mismatch to the modern learning environment we have constructed. Edward Hagen, professor of evolutionary anthropology at Washington State University and co-author of the study, pointed out in a press release that “there is little in our evolutionary history that accounts for children sitting at desks quietly while watching a teacher do math equations at a board.”

Alison Escalante, What If Certain Mental Disorders Are Not Disorders At All?, Psychology Today

This is a great article based on a journal article about PTSD, depression, anxiety, and ADHD. As someone who has suffered from depression in the past, and still deals with anxiety, I absolutely think it has an important situational aspect.

That is to say, instead of just medicating people, we need to be thinking about their context.

[T]he stated goal of the paper is not to suddenly change treatments, but to explore new ways of studying these problems. “Research on depression, anxiety, and PTSD, should put greater emphasis on mitigating conflict and adversity and less on manipulating brain chemistry.”

Alison Escalante, What If Certain Mental Disorders Are Not Disorders At All?, Psychology Today

85 megapixel photo of the moon

Incredible.

/via ajamesmccarthy on Reddit

Pandemic-induced awkwardness

By this point in the year, I would have travelled away from my home office at least once per month to see real, live 3D human beings who aren't other members of my family.

Even if you are ensconced in a pandemic pod with a romantic partner or family members, you can still feel lonely — often camouflaged as sadness, irritability, anger and lethargy — because you’re not getting the full range of human interactions that you need, almost like not eating a balanced diet. We underestimate how much we benefit from casual camaraderie at the office, gym, choir practice or art class, not to mention spontaneous exchanges with strangers.

Kate Murphy, We’re All Socially Awkward Now, The New York Times

As the author points out, our skills can atrophy just like muscles if we don't use them, and interacting via screens is often quite different to interacting offline.

What man of energy does not find inactivity a punishment?

Some changes to Thought Shrapnel

TL;DR: Going forward, Thought Shrapnel will be a bit more random.


One of the benefits of a pause in doing something for a while is that you get to reflect on its upsides and downsides. We've all had a chance to do this during the pandemic, to re-evaluate what we do and why we do it.

Every year, I take a couple of months off Thought Shrapnel, which allows me to recharge myself a bit and commit myself anew to the project. Usually, I come back raring to go and, indeed, have written some stuff to publish as soon as I'm back.

This time, though, was different. I think that's for a couple of reasons:

  • The #100DaysToOffload challenge has got me writing regularly on my personal blog again.
  • Having supporters puts pressure on me to 'produce' something worthwhile, when this was supposed to be a space for stuff 'going in and out of my brain'.

So, with huge thanks to those people who have supported Thought Shrapnel over the past couple of years, I've decided that I'd actually prefer not to have the pressure of patronage. As such, I'm deleting my Patreon account.

I'm keeping the weekly newsletter, for the moment at least, which will probably evolve into a slightly different format than it has been. Bear with me as things might look a bit strange around here while I move things around.

If you like my writing, you might want to head over to dougbelshaw.com/feeds which is where you can see the latest posts from the various places I write. I'm still posting updates to Twitter, but am only interacting with people via Mastodon and LinkedIn these days.

Again, thanks to everyone who has supported Thought Shrapnel with their attention and, in some cases, money over the years. It's still going, it's just changing along with me...


Image by Denny Luan

Saturday spinnings

As usual, I'm taking a month off Thought Shrapnel duties during the month of August. So this is my last post for a few weeks.

In the meantime, consider deactivating your Facebook, Instagram and Twitter accounts. See how it makes you feel, and perhaps I'll run into you on the Fediverse? (start here)


Sinead Bovell

I Am a Model and I Know That Artificial Intelligence Will Eventually Take My Job

There are major issues of transparency and authenticity here because the beliefs and opinions don’t actually belong to the digital models, they belong to the models’ creators. And if the creators can’t actually identify with the experiences and groups that these models claim to belong to (i.e., person of color, LGBTQ, etc.), then do they have the right to actually speak on those issues? Or is this a new form of robot cultural appropriation, one in which digital creators are dressing up in experiences that aren’t theirs?

Sinead Bovell (Vogue)

This is an incredible article that looks at machine learning and AI through the lens of an industry I hadn't thought of as being on the brink of being massively disrupted by technology.


How Capitalism Drives Cancel Culture

It is strange that “cancel culture” has become a project of the left, which spent the 20th century fighting against capricious firings of “troublesome” employees. A lack of due process does not become a moral good just because you sometimes agree with its targets. We all, I hope, want to see sexism, racism, and other forms of discrimination decrease. But we should be aware of the economic incentives here, particularly given the speed of social media, which can send a video viral, and see onlookers demand a response, before the basic facts have been established. Afraid of the reputational damage that can be incurred in minutes, companies are behaving in ways that range from thoughtless and uncaring to sadistic.

[...]

If you care about progressive causes, then woke capitalism is not your friend. It is actively impeding the cause, siphoning off energy, and deluding us into thinking that change is happening faster and deeper than it really is. When people talk about the “excesses of the left”—a phenomenon that blights the electoral prospects of progressive parties by alienating swing voters—in many cases they’re talking about the jumpy overreactions of corporations that aren’t left-wing at all.

Helen Lewis (The Atlantic)

Cancel culture is problematic, and mainly because of the unequal power structures involved. This is an important read. See also this article by Albert Wenger which has some suggestions towards the end in this regard.


Woman working at a laptop

How to Stay Productive When the World Is on Fire

The goal of productivity is to get the things you have to get done finished so you can spend more time on the things you want to do. Don’t fall into the busy trap, where you judge your self-worth by how productive you are or how much you’ve contributed to your company or manager. We’re all just trying to keep our heads above water. I hope these tips will help you do the same.

Alan Henry (WIRED)

As I wrote yesterday on my personal blog, I have a bit of an issue with perfectionism. So this reminder, along with the other great advice in the article, was a timely reminder.


Why you should be thanking your employees more often

If you treat somebody with disdain, of course, you give that person a psychological incentive to diminish your opinion and to want you to be less powerful. Inversely, if you demonstrate understanding and appreciation of someone’s contribution, you create a psychological incentive in the individual to give greater weight to your opinion. And that person will want to strengthen the weight of your opinion in the eyes of others. Appreciation and gratitude breed appreciation and gratitude.

Bruce Tulgan (Fast Company)

Creating a productive, psychologically safe, and emotionally intelligent environment means thanking people for the work they do. That means for their day-to-day activities, not just when they put in a herculean effort. A paycheck is not thanks enough for the work we do and the value we provide.


Old blue boat

Nostalgia reimagined

More interesting still is that nostalgia can bring to mind time-periods we didn’t directly experience. In the film Midnight in Paris (2011), Gil is overwhelmed by nostalgic thoughts about 1920s Paris – which he, a modern-day screenwriter, hasn’t experienced – yet his feelings are nothing short of nostalgic. Indeed, feeling nostalgic for a time one didn’t actually live through appears to be a common phenomenon if all the chatrooms, Facebook pages and websites dedicated to it are anything to go by. In fact, a new word has been coined to capture this precise variant of nostalgia – anemoia, defined by the Urban Dictionary and the Dictionary of Obscure Sorrows as ‘nostalgia for a time you’ve never known’.

How can we make sense of the fact that people feel nostalgia not only for past experiences but also for generic time periods? My suggestion, inspired by recent evidence from cognitive psychology and neuroscience, is that the variety of nostalgia’s objects is explained by the fact that its cognitive component is not an autobiographical memory, but a mental simulation – an imagination, if you will – of which episodic recollections are a sub-class.

Nigel Warburton (Aeon)

In the UK at least, shows like Downton Abbey and Call The Midwife are popular. My view of this is that, as this article would seem to support, it's a kind of nostalgia for a time that was imagined to be better.

There's a sinister side to this, as well. This kind of nostalgia seems to be particularly prevalent among more conservative-leaning (white) people harking back to a time of greater divisions in society along race and class lines. I think it's rather disturbing.


The World Is Noisy. These Groups Want to Restore the Quiet

Quiet Parks International (QPI) is a nonprofit working to establish certification for quiet parks to raise awareness of and preserve quiet places. The fledgling organization—whose members include audio engineers, scientists, environmentalists, and musicians—has identified at least 262 sites worldwide, including 30 in the US, that it believes are quiet or could become so with management changes....

QPI has no regulatory authority, but like the International Dark Sky Association’s Dark Sky Parks initiative, the nonprofit believes its certification—granted only after a detailed, three-day sound analysis—can encourage public support of preservation efforts and provide guidelines for protection. “The places that are quiet today … are basically leftovers—places that are out of the way,” Quiet Parks cofounder Gordon Hempton says.

Jenny Morber (WIRED)

I live in a part of the world close to both a designated Dark Sky Park and mountains into which I can escape. Light and noise pollution threaten both of them, so I'm glad to hear of these efforts.


Header image by Uillian Vargas

Saturday sailings

I deactivated my Twitter account this week. I've done that before, but this time I'm honestly not sure if I'll reactivate it.

Given that I get a fair few links through Twitter, I wonder if the kind of things I share in these weekly link roundups will change? We shall see, I guess. You can connect with me via the Fediverse: https://mastodon.social/@dajbelshaw


33 Myths of the System (book cover)

33 Myths of the System

Drawing on the entire history of radical thought, while seeking to plumb their common depths, 33 Myths of the System presents a synthesis of independent criticism, a straightforward exposure of the justifications of the world-system, along with a new way to perceive and understand the unhappy supermind that directs, penetrates and even lives our lives.

Darren Allen

While I didn't agree with absolutely everything in this free e-book, it's fair to say it blew my mind. Highly recommended, especially for thoughtful people. One of the best things I've read in the last decade in terms of getting me to question... everything.


A catastrophe at Twitter

In any case, Twitter’s response to the incident offered further cause for distress. The company’s initial tweet on the subject said almost nothing, and two hours later it had followed only to say what many users were forced to discover for themselves: that Twitter had disabled the ability of many verified users to tweet or reset their passwords while it worked to resolve the hack’s underlying cause.

The near-silencing of politicians, celebrities, and the national press corps led to much merriment on the service — see this, along with Those good tweets below, for some fun — but the move had other, darker implications. Twitter is, for better and worse, one of the world’s most important communications systems, and among its users are accounts linked to emergency medical services. The National Weather Service in Lincoln, IL, for example, had just tweeted a tornado warning before suddenly going dark. To the extent that anyone was relying on that account for further information about those tornadoes, they were out of luck.

Casey Newton (The Interface)

I didn't actually deactivate my Twitter account because of the hack — that was actually more to do with the book mentioned above — but as a verified user, this certainly reinforced my decision. Just a reminder that at least one person with nuclear codes uses Twitter as their primary means of communication.


This is Fine: Optimism & Emergency in the P2P Network

Centralised platforms crave data collection and thirst for trust from the communities they seek to exploit. These platforms sell bloated, overpowered hardware that cannot be repaired, vulnerable to drops in consumer spending or spasms in the supply chain. They anxiously eye legislation to compel encryption backdoors, which will further weaken the trust they need so badly. They wobble beneath network disruptions (such as the worldwide slowdowns in March under COVID-19 load surges) that incapacitate cloud-dependent devices. They sleep with one eye open in countries where authoritarian governments compel them or their employees to operate as an informal arm of enforcement. These current trajectories point to the accelerating erosion of centralised platform power.

Cade Diehm (The New Design Congress)

This is an incredible article that's very well presented. I keep talking about the importance of decentralisation, and this article backs that up — but also explains how and why decentralised social networks need to do better.


CRT monitors on shelves

Our remote work future is going to suck

While the upsides to remote work are true, for many people remote work is a poison pill — one where you are given “control” in the name of productivity in exchange for some pretty nasty long-term effects.

In reality, remote work makes you vulnerable to outsourcing, reduces your job to a metric, creates frustrating change-averse bureaucracies, and stifles your career growth. The lack of scrutiny our remote future faces is going to result in frustrated workers and ineffective companies.

Sean Blanda

I'm a proponent of remote work, but I was nodding along to many of the points made in this post. Context is everything, and there's something to be said about being able to go home to escape work.


CO2 emissions on the web

Your content site probably doesn’t need JavaScript. You probably don’t need a CSS framework. You probably don’t need a custom font. Use responsive images. Extend your HTTP cache lifetimes. Use a static site generator or wp2static.com instead of dynamically generating each page on the fly, despite never changing. Consider ditching that third-party analytics service that you never look at anyway, especially if they also happen to sell ads. Run your website through websitecarbon.com. Choose a green web host.

Danny van Kooten

This week I changed the theme over at my personal blog to one that is much lighter. When I shared what I'd done on Mastodon, someone commented that they didn't think it would make that much difference. This post was written by someone who popped up to rebut what they said.
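The page-weight logic behind tools like websitecarbon.com can be sketched roughly in a few lines. The energy-per-gigabyte and grid-intensity constants below are illustrative assumptions for this sketch, not figures published by that service:

```python
# Rough CO2-per-page-view estimate from bytes transferred.
# Both constants are illustrative assumptions, not official figures.
KWH_PER_GB = 0.81          # assumed energy used per GB transferred
GRAMS_CO2_PER_KWH = 442    # assumed average grid carbon intensity

def grams_co2_per_view(page_bytes: int) -> float:
    """Estimate grams of CO2 emitted by serving one page view."""
    gigabytes = page_bytes / 1e9
    return gigabytes * KWH_PER_GB * GRAMS_CO2_PER_KWH

# A 2 MB page versus the same page trimmed to 200 KB:
print(f"{grams_co2_per_view(2_000_000):.2f} g vs {grams_co2_per_view(200_000):.2f} g")
```

Multiplied across thousands of monthly page views, trimming JavaScript, custom fonts, and oversized images compounds quickly, which is the point the post is making.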


Ask a Sane Person: Jia Tolentino on Practicing the Discipline of Hope

INTERVIEW: What has this pandemic confirmed or reinforced about your view of society?

TOLENTINO: That capitalist individualism has turned into a death cult; that the internet is a weak substitute for physical presence; that this country criminally undervalues its most important people and its most important forms of labor; that we’re incentivized through online mechanisms to value the representation of something (like justice) over the thing itself; that most of us hold more unknown potential, more negative capability, than we’re accustomed to accessing; that the material conditions of life in America are constructed and maintained by those best set up to exploit them; and that the way we live is not inevitable at all.  

Christopher Bollen

I have to confess to not knowing who Jia Tolentino was before stumbling across this via the Hurry Slowly newsletter (although I must have read her writing before). This is a fantastic interview, which you should read in its entirety.


Header image by Fab Lentz

Friday fadings

I'm putting this together quickly before heading off to the Lake District camping with my son for a couple of nights. I'm pretty close to burnout with all of the things that have happened recently, so need some time on top of mountains and under the stars 🏕️


Slack verticals vs Microsoft

The Slack Social Network

Slack Connect is about more than chat: not only can you have multiple companies in one channel, you can also manage the flow of data between different organizations; to put it another way, while Microsoft is busy building an operating system in the cloud, Slack has decided to build the enterprise social network. Or, to put it in visual terms, Microsoft is a vertical company, and Slack has gone fully horizontal.

Ben Thompson (Stratechery)

The difference between consulting full-time now versus when I last did it in 2017 is that everyone adds you to their Slack workspace. This is simultaneously fantastic and terrible. What's being described here is more on the 'Work OS' stuff I shared in last week's link roundup.

See also Stephen Downes' commentary on mini-apps that perform particular functions inside other apps.


Only 9% of visitors give GDPR consent to be tracked

Advertising funded businesses are aware that the minority of visitors want to give consent.

They are simply riding the ad train and milking the cash cow for as long as they can get away with before GDPR gets enforced and they either shut down, adapt to a more sustainable business model or explore even more privacy invasive practices.

And the alternative to the advertising-funded web? Charge for services. And have your premium subscribers fund the free plans.

Marko Saric

This is interesting, and backs up the findings in this journal article about the 'dark patterns' prevalent around GDPR consent on the web. The author of this post found that only 48% of visitors interacted with the consent banner at all and, as the title states, only 9% of all visitors gave permission to be tracked.
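To make the funnel concrete, here are those percentages applied to a hypothetical 10,000 visitors. Only the 48% and 9% rates come from the post; the absolute visitor count is illustrative:

```python
# Consent funnel from the post, scaled to a hypothetical 10,000 visitors.
visitors = 10_000
interacted = visitors * 48 // 100  # 48% engaged with the consent banner
consented = visitors * 9 // 100    # 9% of all visitors agreed to tracking

print(f"{interacted} interacted, {consented} consented, "
      f"{visitors - consented} left untracked")
```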


Oak National Academy: lockdown saviour or DfE tool?

There are some who are alarmed by the nature of the creature that the DfE has helped bring to life, seeing Oak as an enterprise established by a narrow strata of figures from DfE-favoured multi-academy trusts; and as a potential vehicle for the department to promote a “traditionalist” agenda in teaching, or even create the subject matter of a government-approved curriculum.

John Morgan (TES)

I welcome this critical article in the TES of Oak National Academy. My two children find the lessons 'cringey', not every subject is covered, and the more you look into it, the more it seems like a front for a pedagogical coup.


The More Senior Your Job Title, the More You Need to Keep a Journal

Journal entries should provide not only a record of what happened but how we reacted emotionally; writing it down brings a certain clarity that puts things in perspective. In other cases, it’s a form of mental rehearsal to prepare for particularly sensitive issues where there’s no one to talk with but yourself. Journals can also be the best way to think through big-bet decisions and test one’s logic.

Dan Ciampa (Harvard Business Review)

When I turned 18, I decided to keep a diary of my adult life. After about a decade, that had become a sporadic record of times when things weren't going so well. Now, 21 years later, I merely keep my #HashtagADay journal up-to-date.

But writing things down is really useful, as is having someone to talk to with whom you don't have an emotion-based relationship. After nine sessions of CBT, I wish I'd had someone like my therapist to talk to at a much younger age. Not because I'm 'broken' but because I'm human.


Rome burning

Top 10 books about tumultuous times

There’s nothing like a crisis of survival to show people’s true natures. Though I’ve written a good deal about tumultuous times, both fiction (English Passengers) and non-fiction (Rome: a History in Seven Sackings), I can’t say I’m too interested in the tumult itself. I’m more interested in the decisions people make during such crises – how they ride the wave.

Matthew Kneale (The Guardian)

I don't think I'd heard of any of these books before reading this article! That being said, I've just joined Verso's new Book Club so my backlog just got a lot longer...


Full Employment

Keynes once proposed that we could jump-start an economy by paying half the unemployed people to dig holes and the other half to fill them in.

No one’s really tried that experiment, but we did just spend 150 years subsidizing our ancestors to dig hydrocarbons out of the ground. Now we’ll spend 200-300 years subsidizing our descendants to put them back in there.

Cory Doctorow (Locus Online)

I've quoted the end of this fantastic article, but you should read the whole thing. Doctorow, in his own inimitable way, absolutely eviscerates the prediction that a 'General Artificial Intelligence' will destroy most jobs.


Header image by Patrick Hendry

Saturday shakings

Whew, so many useful bookmarks to re-read for this week’s roundup! It took me a while, so let’s get on with it…


Cartoon picture of someone working from home

What is the future of distributed work?

To Bharat Mediratta, chief technology officer at Dropbox, the quarantine experience has highlighted a huge gap in the market. “What we have right now is a bunch of different productivity and collaboration tools that are stitched together. So I will do my product design in Figma, and then I will submit the code change on GitHub, I will push the product out live on AWS, and then I will communicate with my team using Gmail and Slack and Zoom,” he says. “We have all that technology now, but we don't yet have the ‘digital knowledge worker operating system’ to bring it all together.”

WIRED

OK, so this is a sponsored post by Dropbox on the WIRED website, but what it highlights is interesting. For example, Monday.com (which our co-op uses) rebranded itself a few months ago as a 'Work OS'. There's definitely a lot of money to be made for whoever manages to build an integrated solution, although I think we're a long way off something which is flexible enough for every use case.


The Definition of Success Is Autonomy

Today, I don’t define success the way that I did when I was younger. I don’t measure it in copies sold or dollars earned. I measure it in what my days look like and the quality of my creative expression: Do I have time to write? Can I say what I think? Do I direct my schedule or does my schedule direct me? Is my life enjoyable or is it a chore?

Ryan Holiday

Tim Ferriss has this question he asks podcast guests: "If you could have a gigantic billboard anywhere with anything on it, what would it say and why?" I feel like the title of this blog post is one of the answers I would give to that question.


Do The Work

We are a small group of volunteers who met as members of the Higher Ed Learning Collective. We were inspired by the initial demand, and the idea of self-study, interracial groups. The initial decision to form this initiative is based on the myriad calls from people of color for white-bodied people to do internal work. To do the work, we are developing a space for all individuals to read, share, discuss, and interrogate perspectives on race, racism, anti-racism, identity in an educational setting. To ensure that the fight continues for justice, we need to participate in our own ongoing reflection of self and biases. We need to examine ourselves, ask questions, and learn to examine our own perspectives. We need to get uncomfortable in asking ourselves tough questions, with an understanding that this is a lifelong, ongoing process of learning.

Ian O'Byrne

This is a fantastic resource for people who, like me, are going on a learning journey at the moment. I've found the podcast Seeing White by Scene on Radio particularly enlightening, and at times mind-blowing. Also, the Netflix documentary 13th is excellent, and available on YouTube.


Welding a motherboard

How to Make Your Tech Last Longer

If we put a small amount of time into caring for our gadgets, they can last indefinitely. We’d also be doing the world a favor. By elongating the life of our gadgets, we put more use into the energy, materials and human labor invested in creating the product.

Brian X. Chen (The New York Times)

This is a pretty surface-level article that basically suggests people take their smartphone to a repair shop instead of buying a new one. What it doesn't mention is that aftermarket operating systems such as the Android-based LineageOS can extend the lifetime of smartphones by providing security updates long beyond those provided by vendors.


Law enforcement arrests hundreds after compromising encrypted chat system

EncroChat sold customized Android handsets with GPS, camera, and microphone functionality removed. They were loaded with encrypted messaging apps as well as a secure secondary operating system (in addition to Android). The phones also came with a self-destruct feature that wiped the device if you entered a PIN.

The service had customers in 140 countries. While it was billed as a legitimate platform, anonymous sources told Motherboard that it was widely used among criminal groups, including drug trafficking organizations, cartels, and gangs, as well as hitmen and assassins.

EncroChat didn’t become aware that its devices had been breached until May after some users noticed that the wipe function wasn’t working. After trying and failing to restore the features and monitor the malware, EncroChat cut its SIM service and shut down the network, advising customers to dispose of their devices.

Monica Chin (The Verge)

It goes without saying that I don't want assassins, drug traffickers, and mafia types to be successful in life. However, I'm always a little concerned when there are attacks on encryption, as they're compromising systems also potentially used by protesters, activists, and those who oppose the status quo.


Uncovered: 1,000 phrases that incorrectly trigger Alexa, Siri, and Google Assistant

The findings demonstrate how common it is for dialog in TV shows and other sources to produce false triggers that cause the devices to turn on, sometimes sending nearby sounds to Amazon, Apple, Google, or other manufacturers. In all, researchers uncovered more than 1,000 word sequences—including those from Game of Thrones, Modern Family, House of Cards, and news broadcasts—that incorrectly trigger the devices.

“The devices are intentionally programmed in a somewhat forgiving manner, because they are supposed to be able to understand their humans,” one of the researchers, Dorothea Kolossa, said. “Therefore, they are more likely to start up once too often rather than not at all.”

Dan Goodin (Ars Technica)

As anyone with voice assistant-enabled devices in their home will testify, the number of times they accidentally spin up, or misunderstand what you're saying can be amusing. But we can and should be wary of what's being listened to, and why.


The Five Levels of Remote Work

The Five Levels of Remote Work — and why you’re probably at Level 2

Effective written communication becomes critical the more companies embrace remote work. With an aversion to ‘jumping on calls’ at a whim, and a preference for asynchronous communication... [most] communications [are] text-based, and so articulate and timely articulation becomes key.

Steve Glaveski (The Startup)

This is from March and pretty clickbait-y, but everyone wants to know how they can improve - especially if they didn't work remotely before the pandemic. My experience is that most people are actually at Level 3 and, of course, I'd say that I and my co-op colleagues are at Level 5 given our experience...


Why Birds Can Fly Over Mount Everest

All mammals, including us, breathe in through the same opening that we breathe out. Can you imagine if our digestive system worked the same way? What if the food we put in our mouths, after digestion, came out the same way? It doesn’t bear thinking about! Luckily, for digestion, we have a separate in and out. And that’s what the birds have with their lungs: an in point and an out point. They also have air sacs and hollow spaces in their bones. When they breathe in, half of the good air (with oxygen) goes into these hollow spaces, and the other half goes into their lungs through the rear entrance. When they breathe out, the good air that has been stored in the hollow places now also goes into their lungs through that rear entrance, and the bad air (carbon dioxide and water vapor) is pushed out the front exit. So it doesn’t matter whether birds are breathing in or out: Good air is always going in one direction through their lungs, pushing all the bad air out ahead of it.

Walter Murch (Nautilus)

Incredible. Birds are badass (and also basically dinosaurs).


Montaigne Fled the Plague, and Found Himself

In the many essays of his life he discovered the importance of the moderate life. In his final essay, “On Experience,” Montaigne reveals that “greatness of soul is not so much pressing upward and forward as knowing how to circumscribe and set oneself in order.” What he finds, quite simply, is the importance of the moderate life. We must then, he writes, “compose our character, not compose books.” There is nothing paradoxical about this because his literary essays helped him better essay his life. The lesson he takes from this trial might be relevant for our own trial: “Our great and glorious masterpiece is to live properly.”

Robert Zaretsky (The New York Times)

Every week, Bryan Alexander replies to the weekly Thought Shrapnel newsletter. Last week, he sent this article to both me and Chris Lott (who produces the excellent Notabilia).

We had a bit of a chat, sharing our love of How to Live: A Life of Montaigne in One Question and Twenty Attempts at An Answer by Sarah Bakewell, as well as the useful tidbits it's possible to glean from Stefan Zweig's short biography simply entitled Montaigne.


Header image by Nicolas Comte

Using WhatsApp is a (poor) choice that you make

People often ask me about my stance on Facebook products. They can understand that I don't use Facebook itself, but what about Instagram? And surely I use WhatsApp? Nope.

Given that I don't usually have a single place to point people who want to read about the problems with WhatsApp, I thought I'd create one.


WhatsApp is a messaging app that was acquired by Facebook for the eye-watering amount of $19 billion in 2014. Interestingly, a BuzzFeed News article from 2018 cites confidential documents from the time leading up to the acquisition, obtained by the UK's Department for Culture, Media, and Sport. They show the threat WhatsApp posed to Facebook at the time.

US mobile messenger apps (iPhone) graph from August 2012 to March 2013
A document obtained by the DCMS as part of their investigations

As you can see from the above chart, Facebook executives were shown in 2013 that WhatsApp (8.6% reach) was growing rapidly and posed a huge threat to Facebook Messenger (13.7% reach).

So Facebook bought WhatsApp. But what did they buy? If, as we're led to believe, WhatsApp is 'end-to-end encrypted' then Facebook don't have access to the messages of users. So what's so valuable?


Brian Acton, one of the founders of WhatsApp (and a man who got very rich through its sale) has gone on record saying that he feels like he sold his users' privacy to Facebook.

Facebook, Acton says, had decided to pursue two ways of making money from WhatsApp. First, by showing targeted ads in WhatsApp’s new Status feature, which Acton felt broke a social compact with its users. “Targeted advertising is what makes me unhappy,” he says. His motto at WhatsApp had been “No ads, no games, no gimmicks”—a direct contrast with a parent company that derived 98% of its revenue from advertising. Another motto had been “Take the time to get it right,” a stark contrast to “Move fast and break things.”

Facebook also wanted to sell businesses tools to chat with WhatsApp users. Once businesses were on board, Facebook hoped to sell them analytics tools, too. The challenge was WhatsApp’s watertight end-to-end encryption, which stopped both WhatsApp and Facebook from reading messages. While Facebook didn’t plan to break the encryption, Acton says, its managers did question and “probe” ways to offer businesses analytical insights on WhatsApp users in an encrypted environment.

Parmy Olson (Forbes)

The other way Facebook wanted to make money was to sell tools to businesses allowing them to chat with WhatsApp users. These tools would also give "analytical insights" on how users interacted with WhatsApp.

Facebook was allowed to acquire WhatsApp (and Instagram) despite fears around monopolistic practices. This was because they made a promise not to combine data from various platforms. But, guess what happened next?

In 2014, Facebook bought WhatsApp for $19b, and promised users that it wouldn't harvest their data and mix it with the surveillance troves it got from Facebook and Instagram. It lied. Years later, Facebook mixes data from all of its properties, mining it for data that ultimately helps advertisers, political campaigns and fraudsters find prospects for whatever they're peddling. Today, Facebook is in the process of acquiring Giphy, and while Giphy currently doesn’t track users when they embed GIFs in messages, Facebook could start doing that anytime.

Cory Doctorow (EFF)

So Facebook is harvesting metadata from its various platforms, tracking people around the web (even if they don't have an account), and buying up data about offline activities.

All of this creates a profile. So yes, because of end-to-end encryption, Facebook might not know the exact details of your messages. But they know that you've started messaging a particular user account around midnight every night. They know that you've started interacting with a bunch of stuff around anxiety. They know how the people you message most tend to vote.


Do I have to connect the dots here? This is a company that sells targeted adverts, the kind of adverts that can influence the outcome of elections. Of course, Facebook will never admit that its platforms are the problem; it's always the responsibility of the user to be 'vigilant'.

Man reading a newspaper
A WhatsApp advert aiming to 'fight false information' (via The Guardian)

So you might think that you're just messaging your friend or colleague on a platform that 'everyone' uses. But your decision to go with the flow has consequences. It has implications for democracy. It has implications for creating a de facto monopoly over our digital information. And it has implications for the dissemination of false information.

The features that would later allow WhatsApp to become a conduit for conspiracy theory and political conflict were ones never integral to SMS, and have more in common with email: the creation of groups and the ability to forward messages. The ability to forward messages from one group to another – recently limited in response to Covid-19-related misinformation – makes for a potent informational weapon. Groups were initially limited in size to 100 people, but this was later increased to 256. That’s small enough to feel exclusive, but if 256 people forward a message on to another 256 people, 65,536 will have received it.

[...]

A communication medium that connects groups of up to 256 people, without any public visibility, operating via the phones in their pockets, is by its very nature, well-suited to supporting secrecy. Obviously not every group chat counts as a “conspiracy”. But it makes the question of how society coheres, who is associated with whom, into a matter of speculation – something that involves a trace of conspiracy theory. In that sense, WhatsApp is not just a channel for the circulation of conspiracy theories, but offers content for them as well. The medium is the message.

William Davies (The Guardian)
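
The forwarding arithmetic Davies describes can be sketched directly. Group size and hop count below follow the article's example; the assumption that forwarded-to groups don't overlap is mine, and real-world reach would be lower:

```python
GROUP_SIZE = 256  # WhatsApp's group-size limit cited in the article

def reach_after(hops: int, group_size: int = GROUP_SIZE) -> int:
    """People reached after `hops` rounds of forwarding, assuming each
    recipient forwards the message to one new, non-overlapping full group."""
    return group_size ** hops

print(reach_after(1))  # 256
print(reach_after(2))  # 65536 -- the figure cited above
```

With overlapping groups the true reach is smaller, but the geometric growth is what makes forwarding such a "potent informational weapon".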

I cannot control the decisions others make, nor have I forced my opinions on my two children, who (despite my warnings) both use WhatsApp to message their friends. But, for me, the risk to myself and society of using WhatsApp is not one I'm happy to take.

Just don't say I didn't warn you.


Header image by Rachit Tank

Saturday shoutings

The link I'm most enthusiastic about sharing this week is one to a free email-based course I've created with my co-op colleagues. It's entitled The 7 Habits of Highly Effective Virtual Meetings and is part of a new series we're working on.

Skills for the New Normal

The other links are slightly fewer in number this week because time, it turns out, is finite.


Clean Language: David Grove Questioning Method

Developing Questions
"(And) what kind of X (is that X)?"
"(And) is there anything else about X?"
"(And) where is X? or (And) whereabouts is X?"
"(And) that's X like what?"
"(And) is there a relationship between X and Y?"
"(And) when X, what happens to Y?"

Sequence and Source Questions
"(And) then what happens? or (And) what happens next?"
"(And) what happens just before X?"
"(And) where could X come from?"

Intention Questions
"(And) what would X like to have happen?"
"(And) what needs to happen for X?"
"(And) can X (happen)?"

The first two questions: "What kind of X (is that X)?" and "Is there anything else about X?" are the most commonly used.

As a general guide, these two questions account for around 50% of the questions asked in a typical Clean Language session.

BusinessBalls

I had a great chat with Kristian Still this week, for the first time in about a decade. Kristian was part of EdTechRoundUp back in the day, and early EduTwitter. Among the many things we discussed is his enthusiasm for "clean questioning" which I'm going to investigate further.


How ‘Sustainable’ Web Design Can Help Fight Climate Change

Even our throwaway habits can add up to a mountain of carbon. Consider all the little social emails we shoot back and forth—“thanks,” “got it,” “lol.” The UK energy firm Ovo examined email usage and—using data from Lancaster University professor Mike Berners-Lee, who analyzes carbon footprints—they found that if every adult in the UK just sent one less “thank you” email per day, it would cut 16 tons of carbon each year, equal to 22 round-trip flights between New York and London. They also found that 49 percent of us often send thank-you emails to people “within talking distance.” We can lower our carbon output if we'd just take the headphones off for a minute and stop behaving like a bunch of morlocks.

Clive Thompson (WIRED)

Small differences all add up. Our design choices and the decisions we make about technology all have a part to play in fighting climate change.
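
As a back-of-envelope sketch of how such estimates are built: both inputs below are illustrative assumptions of mine, not figures from the Ovo study, and the result is highly sensitive to the per-email footprint you assume.

```python
# Illustrative 'one fewer email per day' calculation. Both inputs are
# assumptions for the sketch, not the Ovo study's actual numbers.
GRAMS_CO2_PER_EMAIL = 1        # assumed footprint of a short 'thank you' email
UK_ADULTS = 53_000_000         # assumed adult population of the UK

grams_per_year = UK_ADULTS * 365 * GRAMS_CO2_PER_EMAIL
tonnes_per_year = grams_per_year / 1_000_000
print(f"~{tonnes_per_year:,.0f} tonnes of CO2 per year")
```

Even with a tiny per-email footprint, multiplying by tens of millions of people and 365 days produces a large aggregate, which is the study's underlying point.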


Apple, Big Sur, and the rise of Neumorphism

When you boil it down, neumorphism is a focus on how light moves in three-dimensional space. Its predecessor, skeuomorphism, created realism in digital interfaces by simulating textures on surfaces like felt on a poker table or the brushed metal of a tape recorder. An ancillary — though under-developed — aspect of this design style was lighting that interacted realistically with the materials that were being represented; this is why shadows and darkness were so prevalent in those early interfaces.

Jack Koloskus (Input)

The dominant design language over the last five years, without doubt, has been Google's Material Design. Will a neumorphic approach take over? It's certainly an interesting approach.


Snowden: Tech Workers Are Complicit in How Their Companies Hurt Society

He called on those in the tech industry to look at the bigger picture regarding their work and its implications beyond simply a project—and to think deeply and take a stronger stand with regards to who their labor actually serves.

“It’s not enough to read, it’s not enough to believe in something, it’s not enough to write something, you have to eventually stand for something if you want things to change,” he said.

Kevin Truong (Motherboard)

The tech industry is an interesting one as it's a relatively new and immature one, at least in its current guise. As a result, the ethics, and the checks and balances aren't quite there yet.

To my mind, things like unions and professional associations show maturity and the kind of coming together that don't put moral decisions on the shoulders of individuals, but rather on the whole sector.


Brexit

Tea, Biscuits, and Empire: The Long Con of Britishness

[T]here is a narrative chasm between the twee and borderless dreamscape of fantasy Britain and actual, material Britain, where rents are rising and racists are running brave. The chasm is wide, and a lot of people are falling into it. The omnishambles of British politics is what happens when you get scared and mean and retreat into the fairytales you tell about yourself. When you can no longer live within your own contradictions. When you want to hold on to the belief that Britain is the land of Jane Austen and John Lennon and Sir Winston Churchill, the war hero who has been repeatedly voted the greatest Englishman of all time. When you want to forget that Britain is also the land of Cecil Rhodes and Oswald Mosley and Sir Winston Churchill, the brutal colonial administrator who sanctioned the building of the first concentration camps and condemned millions of Indians to death by starvation. These are not contradictions, even though the drive to separate them is cracking the country apart. If you love your country and don’t own its difficulties and its violence, you don’t actually love your country. You’re just catcalling it as it goes by.

Laurie Penny (Longreads)

I always find looking at my country through the lens of foreigners cringe-inducing. I suppose it's a narrative produced for tourists but, sadly, we seem to have believed our own rhetoric, and look where it's gotten us...


How Big Tech Monopolies Distort Our Public Discourse

The idea that Big Tech can mold discourse through bypassing our critical faculties by spying on and analyzing us is both self-serving (inasmuch as it helps Big Tech sell ads and influence services) and implausible, and should be viewed with extreme skepticism

But you don't have to accept extraordinary claims to find ways in which Big Tech is distorting and degrading our public discourse. The scale of Big Tech makes it opaque and error-prone, even as it makes the job of maintaining a civil and productive space for discussion and debate impossible.

Cory Doctorow (EFF)

A tour de force from Doctorow, who eviscerates the companies that make up 'Big Tech' and the role they have in hollowing-out civic society.


Header image by Andrea Piacquadio

The highest ambition of the integrated spectacle is to turn secret agents into revolutionaries and revolutionaries into secret agents

This article is about, and quotes heavily from, Guy Debord's Comments on the Society of the Spectacle, published twenty years after his 1967 Society of the Spectacle. I wanted to share all of the bits that I highlighted, as I think it speaks directly to our current times, so buckle up.


Debord never gives a single definition of 'the spectacle' but rather alludes to it in such a way that the reader is left in no doubt as to what it is. Here's one such section:

Rather than talk of the spectacle, people often prefer to use the term 'media'. And by this they mean to describe a mere instrument, a kind of public service which with impartial 'professionalism' would facilitate the new wealth of mass communication through mass media - a form of communication which has at last attained a unilateral purity, whereby decisions already taken are presented for passive admiration. For what is communicated are orders; and with perfect harmony, those who give them are also those who tell us what they think of them.

p.6

There are three kinds of spectacle: the 'concentrated' and 'diffuse' spectacles that Debord discusses in his earlier work, and the 'integrated' spectacle that he introduces in Comments. Briefly, the concentrated spectacle can be seen in totalitarian regimes, whereas the diffuse spectacle is in evidence in democracies such as the United States.

The integrated spectacle shows itself to be simultaneously concentrated and diffuse...

For the final sense of the integrated spectacle is this - that it has integrated itself into reality to the same extent as it was describing it. As a result, this reality no longer confronts the integrated spectacle as something alien. When the spectacle was concentrated, the greater part of surrounding society escaped it; when diffuse, a small part; today, no part.

p.9

One way of thinking about this in 2020 is the extent to which we carry around the media (a.k.a. the integrated spectacle) in our pockets. It permeates and mediates our reality, and we conform ourselves to its whims and ideas - for example, on social media platforms for likes and follows. We spend our time pointing out the falsity of media reports contrary to our beliefs, always within the construct of the spectacle.

Often enough society's bosses declare themselves ill-served by their media employees: more often they blame the spectators for the common, almost bestial manner in which they indulge in the media's delights. A virtually infinite number of supposed differences within the media thus serve to screen what is in fact the result of a spectacular convergence, pursued with remarkable tenacity.

p.7

Experts are dead in the traditional sense, all that remain are media professionals who help explain the spectacle and serve to perpetuate its existence.

With the destruction of history, contemporary events themselves retreat into a remote and fabulous realm of unverifiable stories, uncheckable statistics, unlikely explanations and untenable reasoning. For every imbecility presented by the spectacle, there are only the media's professionals to give an answer, with a few respectful rectifications or remonstrations.

p.16

What can one do about this? Choose to live outside the grip of the spectacle? Debord says this is practically impossible, as to do so is to be a pariah.

An anti-spectacular notoriety has become something extremely rare. I myself am one of the last people to retain one, having never had any other. But it has also become extraordinarily suspect. Society has officially declared itself to be spectacular. To be known outside spectacular relations is already to be known as an enemy of society.

p.18

This is part of the problem that people are up against when trying to do things that are counter-cultural. The counter-culture is part of the spectacle, and has been commodified; packaged up to be sold at low prices to everyone via t-shirts, mugs, and other trinkets.

The spectacle requires a fleetness of foot imparted to it by everyone's acquiescence to maintain velocity. This is achieved partly through news cycles that produce outrage but then move on quickly to the next target.

When the spectacle stops talking about something for three days, it is as if it did not exist. For it has then gone on to talk about something else, and it is that which henceforth, in short, exists. The practical consequences, as we see, are enormous.

p.20

The spectacular machinery of our age is therefore ill-suited for the kind of messaging required during, say, a global pandemic. The spectacle feeds on our emotions, on our base fears, on our need for safety. It 'others' people, ensuring that there is always a them vs us.

Such a perfect democracy constructs its own inconceivable foe, terrorism. Its wish is to be judged by its enemies rather than by its results. The story of terrorism is written by the state and it is therefore highly instructive. The spectators must certainly never know everything about terrorism, but they must always know enough to convince them that, compared with terrorism, everything else must be acceptable, or in any case more rational and democratic.

p.24

This explains why COVID-19 cannot possibly, so the conspiracy theorists say, come from bats but instead must surely be the 'weaponised' product of an enemy laboratory. It's the reason why two and two are put together to make five, with 5G masts and George Soros and Bill Gates and a 'plandemic' serving to fill the role of terrorist.

Making connections between seemingly disparate people, technologies, and ideas is easier in a world where the spectacle provides a never-ending supply of memetic imagery, designed to resonate on an emotional level.

At the technological level, when images chosen and constructed by someone else have everywhere become the individual's principal connection to the world he formerly observed for himself, it has certainly not been forgotten that these images can tolerate anything and everything; because within the same image all things can be juxtaposed without contradiction. The flow of images carries everything before it, and it is similarly someone else who controls at will this simplified summary of the sensible world; who decides where the flow will lead as well as the rhythm of what should be shown, like some perpetual, arbitrary surprise, leaving no time for reflection, and entirely independent of what the spectator might understand or think of it.

p.27-28

Today, algorithms used by social media platforms dictate what we as users see and do not see. Baby photos precede photos of protesters which are followed by an advert for a new soft drink. No wonder we're not sure what to think.

The only response is submission to the spectacle, a reduction of the self to a pawn in a game played by someone, or something, else.

Paradoxically, permanent self-denial is the price the individual pays for the tiniest bit of social status. Such an existence demands a fluid fidelity, a succession of continually disappointing commitments to false products. It is a matter of running hard to keep up with the inflation of devalued signs of life.

p.32

All of this is depressing enough without adding in deliberate attempts to reduce our agency by feeding us false information with the aim of leaving us confused, apathetic, and less inclined to vote in democratic elections. After all, what's the point when there is no coherent narrative?

Unlike the straightforward lie, disinformation must inevitably contain a degree of truth but one deliberately manipulated by an artful enemy. That is what makes it so attractive to the defenders of the dominant society. The power which speaks of disinformation does not believe itself to be absolutely faultless, but knows that it can attribute to any precise criticism the excessive insignificance which characterises disinformation; with the result that it will never have to admit to any particular fault.

p.45

So we get false flag campaigns, deflection, no-apology apologies, until things, as they always do with the spectacle, move on. As Debord points out, we live in a world "without room for verification" (p.48), so we might as well share that headline that confirms our existing beliefs by retweeting (without reading) as it passes us by.

In the 19th century, it made sense for Ludwig Feuerbach, a thinker who greatly influenced Karl Marx, to point to an emerging preference for the imaginary over the real.

Today, however, the tendency to replace the real with the artificial is ubiquitous. In this regard, it is fortuitous that traffic pollution has necessitated the replacement of the Marly Horses in place de la Concorde, or the Roman statues in the doorway of Saint-Trophime in Arles, by plastic replicas. Everything will be more beautiful than before, for the tourists' cameras.

p.51

Here is the problem for the person, or group of people, wishing to smash the spectacle, to dismantle it, to take it apart. It must be done in one go, rather than piecemeal. Otherwise, the spectacle has too much capacity to self-repair.

In a certain sense the coherence of spectacular society proves revolutionaries right, since it is evident that one cannot reform the most trifling detail without taking the whole thing apart. But at the same time this coherence has eliminated every organised revolutionary tendency by eliminating those social terrains where it had more or less effectively been able to find expression: from trade unions to newspapers, towns to books.

p.80

So there can be no conclusion, only awareness. We live in completely different times to our forebears. I'll leave the last word to Debord.

Old prejudices everywhere belied, precautions now useless, and even the residues of scruples from an earlier age, still clog up the thinking of quite a number of rulers, preventing them from recognising something which practice demonstrates and proves every single day. Not only are the subjected led to believe that to all intents and purposes they are still living in a world which in fact has been eliminated, but the rulers themselves sometimes suffer from the absurd belief that in some respects they do too.

p.87-88

Header image by elCarito

Saturday scrapings

Every week, I go back through the links I've saved, pick out the best ones, and share them here. This week is perhaps even more eclectic than usual. Enjoy!


Marcus Henderson

Meet the Farmer Behind CHAZ's Vegetable Gardens

Marcus was the first to start gardening in the park, though he was quickly joined by friends and strangers. This isn’t the work of a casual amateur; Henderson has an Energy Resources Engineering degree from Stanford University, a Master’s degree in Sustainability in the Urban Environment, and years of experience working in sustainable agriculture. His Instagram shows him hard at work on various construction and gardening projects, and he’s done community development at organic farms around the world.

Matt Baume (The Stranger)

I love this short article about Marcus Henderson, the first person to start planting in Seattle's Capitol Hill Autonomous Zone.


The Rich Are 'Defunding' Our Democracy

“Apparently,” comments [journalist David] Sirota, “we’re expected to be horrified by proposals to reduce funding for the militarized police forces that are violently attacking peaceful protesters — but we’re supposed to obediently accept the defunding of the police forces responsible for protecting the population from the wealthy and powerful.”

Sam Pizzigati (Inequality.org)

A lot of people have been shocked by the calls to 'defund the police' on the back of the Black Lives Matter protests. The situation is undoubtedly worse in the US, but I particularly liked this explainer image, that I came across via Mastodon:

Teapot with label 'Defund the police' which has multiple spouts pouring into cups entitled 'Education', 'Universal healthcare', 'Youth services', 'Housing', and 'Other community investments'

Peasants' Revolt

Yet perhaps the most surprising feature of the revolt is that, in spite of the modern title (Peasants' Revolt didn't gain usage until the late nineteenth century), the people who animated the movement weren't peasants at all. They were in many respects the village elite. True, they weren't noble magnates, but they were constables, stewards and jurors. In short, people who were on the up and saw an opportunity to press their agenda.

Robert Winter

I love reading about things I used to teach, especially when they're written by interesting people about whom I want to know more. This blog post is by Robert Winter, "philosopher and historian by training, Operations Director by pay cheque". I discovered it as part of the #100DaysToOffload challenge, largely happening on the Fediverse, and to which I'm contributing.


Red blood cells

Three people with inherited diseases successfully treated with CRISPR

Two people with beta thalassaemia and one with sickle cell disease no longer require blood transfusions, which are normally used to treat severe forms of these inherited diseases, after their bone marrow stem cells were gene-edited with CRISPR.

Michael Le Page (New Scientist)

CRISPR is a way of doing gene editing within organisms. As far as I'm aware, this is one of the first times it's been used to treat conditions in humans. I'm sure it won't be the last.


Choose Your Own Fake News

Choose Your Own Fake News is an interactive "choose your own adventure" game. Play the game as Flora, Jo or Aida from East Africa, and navigate the world of disinformation and misinformation through the choices you make. Scrutinize news and information about job opportunities, vaccines and upcoming elections to make the right choices!

This is the kind of thing that the Mozilla Foundation does particularly well: either producing in-house, or funding very specific web-based tools to teach people things. In this case, it's fake news. And it's really good.


Why are Google and Apple dictating how European democracies fight coronavirus?

The immediate goal for governments and tech companies is to strike the right balance between privacy and the effectiveness of an application to limit the spread of Covid-19. This requires continuous collaboration between the two, with the private sector learning from the experience of national health authorities and adjusting accordingly. Latvia, together with the rest of Europe, stands firm in defending privacy, and is committed to respecting both the individual’s right to privacy and health while applying its own solutions to combat Covid-19.

Ieva Ilves (The Guardian)

This is an article written by an adviser to the president of Latvia on information and digital policy. They explain some of the nuance behind the centralised vs decentralised contact tracing app models which I hadn't really thought about.


Illustration of Lévy walks

Random Search Wired Into Animals May Help Them Hunt

Lévy walks are now seen as a movement pattern that a nervous system can produce in the absence of useful sensory or mnemonic information, when it is an animal’s most advantageous search strategy. Of course, many animals may never employ a Lévy walk: If a polar bear can smell a seal, or a cheetah can see a gazelle, the animals are unlikely to engage in a random search strategy. “We expect the adaptation for Lévy walks to have appeared only where they confer practical advantages,” Viswanathan said.

Liam Drew (Quanta Magazine)

If you've watched wildlife documentaries, you probably know about Lévy walks (or 'flights'). This longish article gives a fascinating insight into the origin of the theory and how it can be useful in protecting different species.
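
For the curious, a Lévy walk is simple to simulate: step lengths are drawn from a heavy-tailed power law, and directions are chosen uniformly at random. A minimal sketch follows; the exponent and minimum step length are illustrative choices of mine, not values from the article:

```python
import math
import random

def levy_walk(n_steps: int, mu: float = 2.0, l_min: float = 1.0, seed: int = 42):
    """2D Lévy walk: step lengths follow p(l) ~ l^-mu for l >= l_min,
    sampled by inverse-transform from a Pareto distribution."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        u = 1.0 - rng.random()                      # u in (0, 1]
        length = l_min * u ** (-1.0 / (mu - 1.0))   # heavy-tailed step length
        angle = rng.uniform(0.0, 2.0 * math.pi)     # isotropic direction
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        path.append((x, y))
    return path

path = levy_walk(1000)  # occasional long jumps between clusters of short steps
```

The heavy tail is what distinguishes this from ordinary Brownian-style wandering: most steps are short, but rare very long jumps relocate the searcher to unexplored territory, which is why it can be an advantageous search strategy.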


A plan to turn the atmosphere into one, enormous sensor

One of AtmoSense’s first goals will be to locate and study phenomena at or close to Earth’s surface—storms, earthquakes, volcanic eruptions, mining operations and “mountain waves”, which are winds associated with mountain ranges. The aim is to see if atmospheric sensing can outperform existing methods: seismographs for earthquakes, Doppler weather radar for storms and so on.

The Economist

This sounds potentially game-changing. I can see the positives, but I wonder what the negatives will be?


Paths of desire: lockdown has lent a new twist to the trails we leave behind

Desire paths aren’t anything new – the term has been traced back to the French philosopher Gaston Bachelard, who wrote of “lignes de désir” in his 1958 book The Poetics of Space. Nature author Robert Macfarlane has written more recently about the inherent poetry of the paths. In his 2012 book The Old Ways: A Journey on Foot, Macfarlane calls them “elective easements” and says: “Paths are human; they are traces of our relationships.” Desire paths have been created by enthusiastic dogs in back gardens, by superstitious humans avoiding scaffolding and by students seeking shortcuts to class. Yet while illicit trails may have marked the easier (ie shorter) route for centuries, the pandemic has turned them into physical markers of our distance. Desire paths are no longer about making life easier for ourselves, but about preserving life for everyone.

Amelia Tait (The Guardian)

I've used desire paths as a metaphor many times in presentations and workshops over the last decade. This is an article that specifically talks about how they've sprung up during the pandemic.


Header image by Hans Braxmeier

Everyone has a mob self and an individual self, in varying proportions

Digital mediation, decentralisation, and context collapse

Is social media 'real life'? A recent Op-Ed in The New York Times certainly thinks so:

An argument about Twitter — or any part of the internet — as “real life” is frequently an argument about what voices “matter” in our national conversation. Not just which arguments are in the bounds of acceptable public discourse, but also which ideas are considered as legitimate for mass adoption. It is a conversation about the politics of the possible. That conversation has many gatekeepers — politicians, the press, institutions of all kinds. And frequently they lack creativity.

Charlie Warzel (The New York Times)

I've certainly been a proponent over the years for the view that digital interactions are no less 'real' than analogue ones. Yes, you're reading a book when you do so on an e-reader. That's right, you're meeting someone when doing so over video conference. And correct, engaging in a Twitter thread counts as a conversation.

Now that everyone's interacting via digital devices during the pandemic, things that some parts of the population refused to count as 'normal' have at least been normalised. It's been great to see so much IRL mobilisation due to protests that started online, for example with the #BlackLivesMatter hashtag.


With this very welcome normalisation, however, I'm not sure there's a general understanding about how digital spaces mediate our interactions. Offline, our conversations are mediated by the context in which we find ourselves: we speak differently at home, on the street, and in the pub. Meanwhile, online, we experience context collapse as we take our smartphones everywhere.

We forget that we interact in algorithmically-curated environments that favour certain kinds of interactions over others. Sometimes these algorithms can be fairly blunt instruments, for example when 'Dominic Cummings' didn't trend on Twitter despite him being all over the news. Why? Because of anti-porn filters.
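
Twitter hasn't published the filter involved, but the failure mode is the classic 'Scunthorpe problem': a naive substring blocklist flags innocent words. A hypothetical sketch, with the blocklist entry assumed purely for illustration:

```python
# Hypothetical illustration of the 'Scunthorpe problem': a naive substring
# blocklist catches innocent names. Not Twitter's actual trending code.
BLOCKLIST = {"cum"}  # assumed blocked substring, for illustration only

def is_flagged(term: str) -> bool:
    """Naive check: flag any term containing a blocklisted substring."""
    lowered = term.lower()
    return any(bad in lowered for bad in BLOCKLIST)

print(is_flagged("Dominic Cummings"))  # True -- an innocent surname is caught
print(is_flagged("Boris Johnson"))     # False
```

Real systems mitigate this with word boundaries, allowlists, and context, but the sketch shows how a blunt filter can quietly shape what trends.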

Other times, things are quite subtle. I've spoken on numerous occasions about why I don't use Facebook products. Part of the reason for this is that I don't trust their privacy practices or algorithms. For example, a recent study showed that Instagram (which, of course, is owned by Facebook) actively encourages users to show some skin.

While Instagram claims that the newsfeed is organized according to what a given user “cares about most”, the company’s patent explains that it could actually be ranked according to what it thinks all users care about. Whether or not users see the pictures posted by the accounts they follow depends not only on their past behavior, but also on what Instagram believes is most engaging for other users of the platform.

Judith Duportail, Nicolas Kayser-Bril, Kira Schacht and Édouard Richard (Algorithm Watch)

I think I must have linked back to this post of mine from six years ago more than any other one I've written: Curate or Be Curated: Why Our Information Environment is Crucial to a Flourishing Democracy, Civil Society. To quote myself:

The problem with social networks as news platforms is that they are not neutral spaces. Perhaps the easiest way to get quickly to the nub of the issue is to ask how they are funded. The answer is clear and unequivocal: through advertising. The two biggest social networks, Twitter and Facebook (which also owns Instagram and WhatsApp), are effectively “services with shareholders.” Your interactions with other people, with media, and with adverts, are what provide shareholder value. Lest we forget, CEOs of publicly-listed companies have a legal obligation to provide shareholder value. In an advertising-fueled online world this means continually increasing the number of eyeballs looking at (and fingers clicking on) content. 

Doug Belshaw (Connected Learning Alliance)

Herein lies the difficulty. We can't rely on platforms backed by venture capital as they end up incentivised to do the wrong kinds of things. Equally, no-one is going to want to use a platform provided by a government.

This is why I really do still believe that decentralisation is the answer here. Local moderation by people you know and/or trust, happening at an individual or instance level. Algorithmic curation for the benefit of users, which can be turned on or off by the user. Scaling both vertically and horizontally.

At the moment it's not the tech that's holding people back from such decentralisation but rather two things. The first is the mental model of decentralisation. I think that's easy to overcome, as back in 2007 people didn't really 'get' Twitter, etc. The second one is much more difficult, and is around the dopamine hit you get from posting something on social media and becoming a minor celebrity. Although it's possible to replicate this in decentralised environments, I'm not sure we'd necessarily want to?


Slightly modified quotation-as-title by D.H. Lawrence. Header image by Prateek Katyal

Saturday soundings

Black Lives Matter. The money from this month's kind supporters of Thought Shrapnel has gone directly to the 70+ community bail funds, mutual aid funds, and racial justice organizers listed here.


IBM abandons 'biased' facial recognition tech

A 2019 study conducted by the Massachusetts Institute of Technology found that none of the facial recognition tools from Microsoft, Amazon and IBM were 100% accurate when it came to recognising men and women with dark skin.

And a study from the US National Institute of Standards and Technology suggested facial recognition algorithms were far less accurate at identifying African-American and Asian faces compared with Caucasian ones.

Amazon, whose Rekognition software is used by police departments in the US, is one of the biggest players in the field, but there are also a host of smaller players such as Facewatch, which operates in the UK. Clearview AI, which has been told to stop using images from Facebook, Twitter and YouTube, also sells its software to US police forces.

Maria Axente, AI ethics expert at consultancy firm PwC, said facial recognition had demonstrated "significant ethical risks, mainly in enhancing existing bias and discrimination".

BBC News

Like many newer technologies, facial recognition is already a battleground for people of colour. This is a welcome, if potentially cynical, move by IBM which, let's not forget, literally provided technology to the Nazis.


How Wikipedia Became a Battleground for Racial Justice

If there is one reason to be optimistic about Wikipedia’s coverage of racial justice, it’s this: The project is by nature open-ended and, well, editable. The spike in volunteer Wikipedia contributions stemming from the George Floyd protests is certainly not neutral, at least to the extent that word means being passive in this moment. Still, Koerner cautioned that any long-term change of focus to knowledge equity was unlikely to be easy for the Wikipedia editing community. “I hope that instead of struggling against it they instead lean into their discomfort,” she said. “When we’re uncomfortable, change happens.”

Stephen Harrison (Slate)

This is a fascinating glimpse into Wikipedia and how the commitment to 'neutrality' affects coverage of different types of people and events.


Deeds, not words

Recent events have revealed, again, that the systems we inhabit and use as educators are perfectly designed to get the results they get. The stated desire is there to change the systems we use. Let’s be able to look back to this point in two years and say that we have made a genuine difference.

Nick Dennis

Some great questions here from Nick, some of which are specific to education, whereas others are applicable everywhere.


Sign with hole cut out saying 'NO JUSTICE NO PEACE'

Audio Engineers Built a Shield to Deflect Police Sound Cannons

Since the protests began, demonstrators in multiple cities have reported spotting LRADs, or Long-Range Acoustic Devices, sonic weapons that blast sound waves at crowds over large distances and can cause permanent hearing loss. In response, two audio engineers from New York City have designed and built a shield which they say can block and even partially reflect these harmful sonic blasts back at the police.

Janus Rose (Vice)

For those not familiar with the increasing militarisation of police in the US, this is an interesting read.


CMA to look into Facebook's purchase of gif search engine

The Competition and Markets Authority (CMA) is inviting comments about Facebook’s purchase of a company that currently provides gif search across many of the social network’s competitors, including Twitter and the messaging service Signal.

[...]

[F]or Facebook, the more compelling reason for the purchase may be the data that Giphy has about communication across the web. Since many services that integrate with the platform not only use it to find gifs, but also leave the original clip hosted on Giphy’s servers, the company receives information such as when a message is sent and received, the IP address of both parties, and details about the platforms they are using.

Alex Hern (The Guardian)

In my 2012 TEDx Talk I discussed the memetic power of gifs. Others might find this news surprising, but I don't think I would have been surprised even back then that it would be such a hot topic in 2020.

Also by the Hern this week is an article on Twitter's experiments around getting people to actually read things before they tweet/retweet them. What times we live in.


Human cycles: History as science

To Peter Turchin, who studies population dynamics at the University of Connecticut in Storrs, the appearance of three peaks of political instability at roughly 50-year intervals is not a coincidence. For the past 15 years, Turchin has been taking the mathematical techniques that once allowed him to track predator–prey cycles in forest ecosystems, and applying them to human history. He has analysed historical records on economic activity, demographic trends and outbursts of violence in the United States, and has come to the conclusion that a new wave of internal strife is already on its way. The peak should occur in about 2020, he says, and will probably be at least as high as the one in around 1970. “I hope it won't be as bad as 1870,” he adds.

Laura Spinney (Nature)

I'm not sure about this at all, because if you go looking for examples of something to fit your theory, you'll find it. Especially when your theory is as generic as this one. It seems like a kind of reverse fortune-telling?


Universal Basic Everything

Much of our economies in the west have been built on the idea of unique ideas, or inventions, which are then protected and monetised. It’s a centuries old way of looking at ideas, but today we also recognise that this method of creating and growing markets around IP protected products has created an unsustainable use of the world’s natural resources and generated too much carbon emission and waste.

Open source and creative commons moves us significantly in the right direction. From open sharing of ideas we can start to think of ideas, services, systems, products and activities which might be essential or basic for sustaining life within the ecological ceiling, whilst also re-inforcing social foundations.

Tessy Britton

I'm proud to be part of a co-op that focuses on openness of all forms. This article is a great introduction to anyone who wants a new way of looking at our post-COVID future.


World faces worst food crisis for at least 50 years, UN warns

Lockdowns are slowing harvests, while millions of seasonal labourers are unable to work. Food waste has reached damaging levels, with farmers forced to dump perishable produce as the result of supply chain problems, and in the meat industry plants have been forced to close in some countries.

Even before the lockdowns, the global food system was failing in many areas, according to the UN. The report pointed to conflict, natural disasters, the climate crisis, and the arrival of pests and plant and animal plagues as existing problems. East Africa, for instance, is facing the worst swarms of locusts for decades, while heavy rain is hampering relief efforts.

The additional impact of the coronavirus crisis and lockdowns, and the resulting recession, would compound the damage and tip millions into dire hunger, experts warned.

Fiona Harvey (The Guardian)

The knock-on effects of COVID-19 are going to be with us for a long time yet. And these second-order effects will themselves have effects which, with climate change also being in the mix, could lead to mass migrations and conflict by 2025.


Mice on Acid

What exactly a mouse sees when she’s tripping on DOI—whether the plexiglass walls of her cage begin to melt, or whether the wood chips begin to crawl around like caterpillars—is tied up in the private mysteries of what it’s like to be a mouse. We can’t ask her directly, and, even if we did, her answer probably wouldn’t be of much help.

Cody Kommers (Nautilus)

The bit about 'ego disillusion' in this article, which is ostensibly about how to get legal hallucinogens to market, is really interesting.


Header image by Dmitry Demidov

Saturday shruggings

I've got a proper Elgato green screen in my home office which I started using in earnest for virtual backgrounds this week. I'm quite fond of some of the Star Wars examples, but check out Disney, Studio Ghibli, The Simpsons, or even the curated collection on Unsplash!

It's been a crazy-busy week and I've worked a lot. Still, these are the things that caught my eye...


The future

It's always the wrong time to do anything

When this is over, and it will someday be over in one form or another, there’ll be a plethora of articles on all the “clever” people who saw OPPORTUNITIES and took advantage of them.
These articles are going to pretend some of those people were able to Mentok the Mindtaker their way through a global pandemic right to the sweet, profitable truth at its centre. And it will be so much bullshit, because they didn’t know how it will turn out. None of us do. They just have enough resources that not knowing didn’t matter.

[...]

Could you have done more? Yes. More isn’t the same as best. Whatever you did and however it was mitigated, constrained by your thoughts or desire or ambition or resources, was what was available for you to do. That’s how time works. We do what we do when we do it, and then, and here’s the best part, here’s the part that takes all those clever people mentioned earlier and just shoots them out into fucking space, then we can decide the next time whether we want to do more.

Thom Wong

So good. Read the whole thing. At its heart it has the teachings of Epictetus.


Ten reasons why immunity passports are a bad idea

Societal stratification. Labelling people on the basis of their COVID-19 status would create a new measure by which to divide the ‘haves’ and the ‘have-nots’ — the immunoprivileged and the immunodeprived. Such labelling is particularly concerning in the absence of a free, universally available vaccine. If a vaccine becomes available, then people could choose to opt in and gain immune certification. Without one, stratification would depend on luck, money and personal circumstances. Restricting work, concerts, museums, religious services, restaurants, political polling sites and even health-care centres to COVID-19 survivors would harm and disenfranchise a majority of the population.

Natalie Kofler (Nature)

The NHS app has facial recognition, which paves the way for immunity passports. Not that anyone in their right mind would install it. This article outlines all of the reasons why such passports are a terrible idea; I've quoted just one of them here.


Dancing with tools

If you get good at a type of technology, you’ll find yourself using it often. On the other hand, if you decide that you’re somehow untalented at it (which is nonsense) or don’t take the time, then you’ll have sacrificed leverage and confidence that were offered to you.

Seth Godin

I tell my kids every single day that everyone they think is good at something has practised and practised and practised. It's particularly true when it comes to tech, yet the barriers have never been so low.


Black triangles

Free as in Smash the Surveillance State: Alison Macrina on Library Freedom Project and Tor Browser

[R]ight now, seven of the top ten companies by market capitalization are tech companies. Seven out of ten are using data that they take from us, without our consent, to create their products. That is part of our labor power: those products are made with our emotional labor, our mental labor. Privacy is a way to reclaim our labor power. I want people to think about those relationships.

And, yeah, I also want people to not get their identities stolen. All of the more concrete problems are still important to me, especially when you think about who is subject to them—it's poor people and elderly people and people who don't have power. But with all of this work, I'm really trying to force a conversation about who controls the internet and what that means for our lives.

Alison Macrina (Logic Magazine)

Those people who don't think they need to know surveillance self-defence don't know what's coming next. Privacy is power.


How to read RSS in 2020

Another big benefit of RSS is that you curate your own feeds. You get to choose what you subscribe to in your feed reader, and the order in which the posts show up. You might prefer to read the oldest posts first, or the newest. You might group your feeds by topic or another priority. You are not subjected to the “algorithmic feed” of Facebook, Twitter, Instagram, YouTube, where they choose the order for you. You won’t miss your friends’ posts because the algorithm decided to suppress them, and you are not forced to endure ads disguised as content (unless a feed you subscribe to includes ads inside their posts).

Laura Kalbag

I pay Disroot for access to a Nextcloud instance where I do my RSS reading. Annoyingly, a couple of weeks ago they did an upgrade and the RSS module isn't compatible. So now I'm on the lookout for an alternative.

This article by Laura Kalbag is a good primer on what RSS is and how to use it. After all, curating your own information environment is important to our democratic processes.
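Part of what makes RSS so durable is that there's no magic to it: a feed is just XML, and "curating your own feeds" amounts to fetching and parsing that XML on your own terms. As a minimal sketch using only Python's standard library (the feed content here is invented for illustration, and a real reader would fetch each feed over HTTP):

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# A minimal RSS 2.0 feed, inlined for illustration; a real reader
# would fetch this XML from each feed URL you subscribe to.
FEED = """<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>Newest post</title><pubDate>Sat, 13 Jun 2020 10:00:00 GMT</pubDate></item>
    <item><title>Older post</title><pubDate>Mon, 01 Jun 2020 09:00:00 GMT</pubDate></item>
  </channel>
</rss>"""

def items_oldest_first(feed_xml):
    """Return (date, title) pairs for each <item>, oldest first --
    you, not an opaque algorithm, choose the ordering."""
    root = ET.fromstring(feed_xml)
    items = []
    for item in root.iter("item"):
        when = parsedate_to_datetime(item.findtext("pubDate"))
        items.append((when, item.findtext("title")))
    return sorted(items)

for when, title in items_oldest_first(FEED):
    print(when.date(), title)
```

The point of the sketch is the one Kalbag makes: the ordering decision lives in your code (or your reader's settings), not on someone else's server.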


Telepresence

Software will eat software in a remote-first world

We are coming to a point where software is developing so fast and the abstractions getting better that soon we will have more software written by a smaller number of people. In other words, just like software made legions of people working in other industries obsolete, it will soon make its creators less valuable too. In short, software will eat software. Or maybe, software will eat software people? I’m still working on it…

Can Duruk (Margins)

I don't actually agree with this line of reasoning at all, and find it quite US-centric, actually. Worth reading the whole thing, though, and seeing if you agree. I've found that it's easier to do collective action remotely, as it's easier to have quick backchannel conversations with colleagues.


Zuckerberg dismisses fact-checking after bragging about fact-checking

Zuckerberg has been reasonably consistent in making sure to leave large carve-outs in site policy for politicians, including the president. Last year, Facebook made clear that its community standards—including hate speech and abuse rules as well as fact-checking policies—do not apply to politicians or other newsworthy figures. The company has also said many times that political content and advertising does not need to be truthful, instead putting the onus on users to avoid lies or to recognize every time they are being lied to.

Kate Cox (Ars Technica)

Mark Zuckerberg is a menace to society, and I still refuse to use any of his companies' products. Well, except Oculus, but I don't have to use a Facebook login for that.

An internal report four years ago found that 64% of all people joining extremist groups did so because of Facebook's recommendation tools. Sixty. Four. Percent.

And they did nothing.


Joe Hart: 'All I want is to be a big part of a club. That's all that burns through me'

A simplistic assessment of Hart’s career would suggest it splits into two halves – before Pep and after Pep. Hart was told clearly by Guardiola in July 2016 that he had no future at City. Guardiola cannot be accused of being wrong for, in Ederson, he now has an outstanding goalkeeper who is highly accomplished with his feet. But Hart is intelligent and interested in exploring the blurred boundaries of football. Some of his greatest games – including the 2015 Champions League night at the Camp Nou when Lionel Messi described him as a “phenomenon” – followed soon after adversity. He is also keen to explain that, despite his predicament, his desire has intensified.

Donald McRae (The Guardian)

Oh man, I feel so sorry for Joe Hart. Yes, he's a wealthy sportsman but I think we can all empathise with a ruthless manager coming in and destroying his confidence.

It's a great article which is testament not only to the resilience of the man, but also the journalist writing about him.

I've linked to the original article in The Guardian but it's behind a (free) registration wall. Also available here.


Header image? The Shrug Emoji!

Saturday signalings

I've been head-down doing lots of work this week, and then it's been Bank Holiday weekend, so my reading has been pretty much whatever my social media feeds have thrown up!

There's broadly three sections here, though: stuff about the way we think, about technology, and about ways of working. Enjoy!


[www.youtube.com/watch](https://www.youtube.com/watch?v=fD58Bt2gj78#action=share)

How Clocks Changed Humanity Forever, Making Us Masters and Slaves of Time

The article with the above embedded video is from five years ago, but someone shared it on my Twitter timeline and it reminded me of something. When I taught my History students about the Industrial Revolution it blew their minds that different parts of the country could be, effectively, on different 'timezones' until the dawn of the railways.

It just goes to show how true it is that first we shape our tools, and then they shape us.


'Allostatic Load' is the Psychological Reason for Our Pandemic Brain Fog

“Uncertainty is one of the biggest elements that contributes to our experience of stress,” said Lynn Bufka, the senior director of Practice, Research, and Policy at the American Psychological Association. “Part of what we try to do to function in our society is to have some structure, some predictability. When we have those kinds of things, life feels more manageable, because you don’t have to put the energy into figuring those things out.”

Emily Baron Cadloff (VICE)

A short but useful article on why, despite having grand plans, it's difficult to get anything done in our current situation. We can't even plan holidays at the moment.


Most of the Mind Can’t Tell Fact from Fiction

The industrialized world is so full of human faces, like in ads, that we forget that it’s just ink, or pixels on a computer screen. Every time our ancestors saw something that looked like a human face, it probably was one. As a result, we didn’t evolve to distinguish reality from representation. The same perceptual machinery interprets both.

Jim Davies (Nautilus)

A useful reminder that our brain contains several systems, some of which are paleolithic.


Wright Flier and Bell Rocket Belt

Not even wrong: ways to predict tech

The Wright Flier could only go 200 meters, and the Rocket Belt could only fly for 21 seconds. But the Flier was a breakthrough of principle. There was no reason why it couldn't get much better, very quickly, and Blériot flew across the English Channel just six years later. There was a very clear and obvious path to make it better. Conversely, the Rocket Belt flew for 21 seconds because it used almost a litre of fuel per second - to fly like this for half an hour you’d need almost two tonnes of fuel, and you can’t carry that on your back. There was no roadmap to make it better without changing the laws of physics. We don’t just know that now - we knew it in 1962.

Benedict Evans

A useful post about figuring out whether something will happen or be successful. The question is "what would have to change?"


Grandmother ordered to delete Facebook photos under GDPR

The case went to court after the woman refused to delete photographs of her grandchildren which she had posted on social media. The mother of the children had asked several times for the pictures to be deleted.

The GDPR does not apply to the "purely personal" or "household" processing of data. However, that exemption did not apply because posting photographs on social media made them available to a wider audience, the ruling said.

"With Facebook, it cannot be ruled out that placed photos may be distributed and may end up in the hands of third parties," it said.

The woman must remove the photos or pay a fine of €50 (£45) for every day that she fails to comply with the order, up to a maximum fine of €1,000.

BBC News

I think this is entirely reasonable, and I'm hoping we'll see more of this until people stop thinking they can share the personally identifiable information of others whenever and however they like.


Developing new digital skills – is training always the answer?

Think ESKiMO:

- Environment (E) – are the reasons it's not happening outside of the control of the people you identified in Step 1? Do they have the resources, the tools, the funding? Do their normal objectives mean that they have to prioritise other things? Does the prevailing organisational culture work against achieving the goals?

- Skills (S) – Are they aware of the tasks they need to do and enabled to do them?

- Knowledge (K) – is the knowledge they need available to them? It could either be information they have to carry around in their heads, or just be available in a place they know about.

- Motivation (Mo) – Do they have the will to carry it out?

The last three (S, K, Mo) work a little bit like the fire triangle from that online fire safety training you probably had to do this year. All three need to be present for new practice to happen and to be sustainable.

Chris Thomson (Jisc)

In this post, Chris Thomson, who I used to work with at Jisc, challenges the notion that training is about getting people to do what you want. Instead, this ESKiMO approach asks why they're not already doing it.


xkcd: estimating time

Leave Scrum to Rugby, I Like Getting Stuff Done

Within Scrum, estimates have a primary purpose – to figure out how much work the team can accomplish in a given sprint. If I were to grant that Sprints were a good idea (which I obviously don’t believe) then the description of estimates in the official Scrum guide wouldn’t be a problem.

The problem is that estimates in practice are a bastardization of reality. The Scrum guide is vague on the topic so managers take matters into their own hands.

Lane Wagner (Qvault)

I'm a product manager, and I find it incredible that people assume that 'agile' is the same as 'Scrum'. If you're trying to shoehorn the work you do into a development process then, to my mind, you're doing it wrong.

As with the example below, it's all about something that works for your particular context, while bearing in mind the principles of the agile manifesto.


How I trick my well developed procrastination skills

The downside of all those nice methods and tools is that you have to apply them, which can, of course, be postponed as well. Thus, the most important step is to integrate your tool or todo list into your daily routine. Whenever you finish a task, or you're thinking about what to do next, the focus should be on your list. For example, I figured out that I always click on one link in my browser favourites (a news website) or an app on my mobile phone (my email app). Sometimes I clicked a hundred times a day, even though I knew there couldn't be any new emails, as I had checked one minute ago. Maybe you have also developed such a “useless” habit which should be broken, or at least used for something good. So I just replaced the app on my mobile and the link in my browser with my Remember The Milk app, which shows me the tasks I have to do today. If you have just a paper-based solution it might be more difficult, but try to integrate it into your daily routines, and keep it always in reach. After finishing a task, you should tick it in your system, which also forces you to have a look at the task list again.

Wolfgang Gassler

Some useful pointers in this post, especially at the end about developing and refining your own system that depends on your current context.


The Great Asshole Fallacy

The focus should be on the insistence of excellence, both from yourself and from those around you. The wisdom from experience. The work ethic. The drive. The dedication. The sacrifice. Jordan hits on all of those. And he even implies that not everyone needed the “tough love” to push them. But that’s glossed over for the more powerful mantra. Still, it doesn’t change the fact that not only are there other ways to tease such greatness out of people — different people require different methods.

M.G. Siegler (500ish)

I like basketball, and my son plays, but I haven't yet seen the documentary mentioned in this post. The author discusses Michael Jordan stating that "Winning has a price. And leadership has a price." However, he suggests that this isn't the only way to get to excellence, and I would agree.


Header image by Romain Briaux

The shoe that fits one person pinches another; there is no recipe for living that suits all cases

Twitter, the Fediverse, and MoodleNet

In a recent blog post, Twitter made a big deal of the fact that they are testing new conversation settings.

While some people don't necessarily think this is a good idea, I think it's a step forward. In fact, I've actually already tried out this functionality... on the Fediverse.

The Fediverse (a portmanteau of "federation" and "universe") is the ensemble of federated (i.e. interconnected) servers that are used for web publishing (i.e. social networking, microblogging, blogging, or websites) and file hosting, but which, while independently hosted, can intercommunicate with each other.

Wikipedia

That's a mouthful, so let's come back to the details in a moment and deal with a concrete example first. Here is a screenshot showing what Twitter has learned from Mastodon (and other federated social networks) in terms of how to make conversations better.

Composing a 'toot' in Mastodon and choosing who can see it

The Fediverse feels like a very different place to Twitter. There's a reason why you will find the marginalised, the oppressed, and very niche interests here: it's a safe space. And, despite macho right-leaning posturing, we all need spaces online where we can be ourselves.


Of course 'federation' and 'decentralisation' aren't words that most of us tend to use on a day-to-day basis. So it's important to define terms here so you can see the inherent difference between using something like Twitter and something like Mastodon.

Note: I can pretty much guarantee by 2030 you'll be using a federated social network of some description. After all, in 2007 people told me Twitter would never catch on, yet a few years later pretty much everyone was using it.

Taken from docs.joinmastodon.org

Check out the diagram above. On the left is the representation of a centralised platform. An example of that would be Facebook. You're either on Facebook, or you're not on Facebook. I don't use any of Facebook's products out of a concern for privacy, civil liberties, and the threat they pose to democracy. As a result, my ethical stance means that anything posted to Facebook, Instagram, or WhatsApp is inaccessible to me. You either have an account on their servers, or you don't.

On the right of the diagram, you can see the representation of a distributed social network. Here, every server has a copy of what is on every other server. This is how BitTorrent works, and it's great for resilience and ensuring things are fault-tolerant. There are a couple of examples of social networks that use this approach (e.g. Scuttlebutt), but they're primarily used for situations where users have intermittent internet access.

Then, in the middle is a federated social network. This is what I'm focusing on in this article. It's kind of how email works; you can email anyone else in the world no matter which email platform they use. GMail users email Outlook users email Fastmail users. Only the data you send and receive with the person you are communicating with resides on each email server; you don't have a copy of everyone in the whole network's email!

So, just as with email, federated social networks have an underlying protocol to ensure that messages from one platform can be understood, displayed, and replied to by another. Those making the platform, of course, have to bake that functionality in; Facebook, Twitter, and the like choose not to do so.

What does this mean in practice? Well, let's take three examples. The first is from around 10 years ago, when I decided to delete my Facebook account. That means I haven't had an account there, nor been able to access any non-public information on that social network, for a decade.

On the other hand, about five years ago, I ditched GMail for Protonmail because I wanted to improve the privacy and security of my personal email account. Leaving GMail didn't mean giving up having an email account.

Likewise, a couple of years ago, I decided to leave my Mastodon-powered social.coop account as I was getting some hassle. Instead of quitting the social network, as I would have had to do if this had happened on Facebook, I could quickly and easily move my account to mastodon.social. All of my settings were imported, including all of the people I was following!


An aside about moderation. What Twitter is doing with its new functionality is giving its users tools to do some of their own moderation. Other than that, the only moderation possible within the Twitter network is to 'report' tweets for spam or abuse. Moderators, acting on a network-wide scale, then need to figure out whether the tweet contravened their guidelines. Having reported tweets before, I can say this can take days and is often not resolved to anyone's satisfaction.

Contrast that with the Fediverse, where people join instances depending on a range of factors including their geographic location, languages spoken, political and religious beliefs, tolerance for profanity, and so on. Fediverse users are accessing the wider network through a server that is moderated by people they trust. If they stop trusting those moderators they can move their account elsewhere, or even host their own server.

This leads to much faster, more local, and more effective moderation. Instance-level blocking is common, as it should be. After all, you have the right to discuss with other people things I find hateful, but it doesn't mean I have to see them on my timeline.


Post using PixelFed

You may be wondering how this looks and feels in practice. The above screenshot is from PixelFed, a federated social network that is a bit like Instagram. The difference, as I'm sure you've already guessed, is that it's federated!

Mastodon timeline showing update from PixelFed

Check out the two posts on my Mastodon timeline above.

The top post is an example of someone on Mastodon 'republishing' the same thing they've posted on Twitter. They've literally had to do the manual work of separately uploading the image and entering the text on each social network, and have to maintain two separate accounts.

The bottom post, on the other hand, is my PixelFed post showing up in my Mastodon feed. No extra work was involved here: anyone's Mastodon account can follow anyone's PixelFed account, and it's all down to the magic of open, federated protocols. In this case, ActivityPub.
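To make that "magic" a little more concrete: when a PixelFed post shows up in a Mastodon timeline, what travels between the servers is a JSON activity in the ActivityStreams vocabulary that ActivityPub defines. Here's a deliberately simplified sketch of that shape; real servers add HTTP signatures, inbox delivery, media attachments and more, and the user and domain names below are entirely invented:

```python
import json

def make_note_activity(actor, content, followers):
    """Wrap a 'Note' object in a 'Create' activity, the basic unit
    that ActivityPub servers exchange when someone posts."""
    note = {
        "type": "Note",
        "attributedTo": actor,   # who wrote it
        "content": content,
        "to": [followers],       # the intended audience
    }
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor,
        "object": note,
    }

# Hypothetical actor and follower-collection URLs, for illustration only.
activity = make_note_activity(
    actor="https://pixelfed.example/users/doug",
    content="Sunrise over the hills",
    followers="https://pixelfed.example/users/doug/followers",
)
print(json.dumps(activity, indent=2))
```

Because any compliant server can parse this structure, a Mastodon instance doesn't care that the `actor` lives on a PixelFed server; it just renders the `Note` in your timeline.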

There are many federated social networks — many more, in fact, than are listed on the Wikipedia page for Fediverse. One of my favourites is Misskey, just because it's so... Japanese. You can choose whatever suits you, and everything works together.

As the Electronic Frontier Foundation said back in 2011 when writing about federated social networks:

The best way for online social networking to become safer, more flexible, and more innovative is to distribute the ability and authority to the world's users and developers, whose various needs and imaginations can do far more than what any single company could achieve.

Richard Esguerra (EFF)

As many people reading this will be aware, I have skin in this game, a dog in this fight, a horse in this race because of MoodleNet. The difference is that MoodleNet is not only a federated social network, but a decentralised digital commons. Educators join communities to curate collections of openly-licensed resources.

This poses additional design challenges to those faced by existing federated social networks. We're pretty close now to v1.0 beta and have built upon the fantastic thinking and approaches of other federated social networks. In addition, we've added functionality that is specific (at the moment, at least) to MoodleNet, and suits our target audience.

No video above? Try this!

So, not so much a 'conclusion' to this particular piece of writing as a screencast video to show you what I mean with MoodleNet, along with the judicious use of this emoji: 🤔


Quotation-as-title from Carl Jung. Header image by Md. Zahid Hasan Joy

Saturday shiftings

I think this is the latest I've published my weekly roundup of links. That's partly because of an epic family walk we did today, but also because of work, and because of the length and quality of the things I bookmarked to come back to...

Enjoy!


Graffiti in Hong Kong subway station (translation: “We can’t return to normal, because the normal that we had was precisely the problem.”)

FC97: Portal Economics

Most of us are still trapped in the mental coordinates of a world that isn’t waiting for us on the other side. You can see this in the language journalists are still using. The coronavirus is a ‘strategic surprise’ and we’re still very much in the ‘fog of war,’ dealing with the equivalent of an ‘alien invasion’ or an ‘unexpected asteroid strike.’ As I said back in March though, this is not a natural disaster, like an earthquake, a one-off event from which we can rebuild. It’s not a war or a financial crisis either. There are deaths, but no combatants, no physical resources have been destroyed, and there was no initial market crash, although obviously the markets are now reacting.

The crisis is of the entire system we’ve built. In another article, I described this as the bio-political straitjacket. We can’t reopen our economies, because if we do then more people will die. We can’t keep them closed either, because our entire way of life is built on growth, and without it, everything collapses. We can give up our civil liberties, submitting to more surveillance and control, but as Amartya Sen would say, what good is a society if the cost of our health and livelihoods is our hard fought for freedoms?

Gus Hurvey (Future Crunch)

This is an incredible read, and if you click through to anything this week to sit down and consume with your favourite beverage, I highly recommend this one.


Coronavirus shows us it’s time to rethink everything. Let's start with education

There’s nothing radical about the things we’re learning: it’s a matter of emphasis more than content – of centralising what is most important. Now, perhaps, we have an opportunity to rethink the entire basis of education. As local authorities in Scotland point out, outdoor learning could be the best means of getting children back to school, as it permits physical distancing. It lends itself to re-engagement with the living world. But, despite years of research demonstrating its many benefits, the funding for outdoor education and adventure learning has been cut to almost nothing.

George Monbiot (The Guardian)

To some extent, this is Monbiot using a different stick to bang the same drum, but he certainly has a point about the most important things to be teaching our young people as their future begins to look a lot different to ours.


The Machine Pauses

In 1909, following a watershed era of technological progress, but preceding the industrialized massacres of the Somme and Verdun, E.M. Forster imagined, in “The Machine Stops,” a future society in which the entirety of lived experience is administered by a kind of mechanical demiurge. The story is the perfect allegory for the moment, owing not least to its account of a society-wide sudden stop and its eerily prescient description of isolated lives experienced wholly through screens.

Stuart Whatley (The Hedgehog Review)

No, I didn't know what a 'demiurge' was either. Apparently, it's "an artisan-like figure responsible for fashioning and maintaining the physical universe".

This article, which quotes not only E.M. Forster but also Heidegger and Nathaniel Hawthorne, discusses whether we really should be allowing technology to dictate the momentum of society.


Party in a spreadsheet

Party in a Shared Google Doc

The party has no communal chat log. Whilst I can enable edit permissions for those with the party link, shared Google Docs do not allow for chat between anonymous animals. Instead conversations are typed in cells. There are too many animals to keep track of who is who. I stop and type to someone in a nearby cell. My cursor is blue, theirs is orange. I have no idea if they are a close friend or a total stranger. How do you hold yourself and what do you say to someone when personal context is totally stripped away?

Marie Foulston

I love this so much.


Being messy when everything is clean

[T]o put it another way, people whose working lives can be mediated through technology — conducted from bedrooms and kitchen tables via Teams or Slack, email and video calls — are at much less risk. In fact, our laptops and smartphones might almost be said to be saving our lives. This is an unintended consequence of remote working, but it is certainly a new reality that needs to be confronted and understood.

And many people who can work from a laptop are also less likely to lose their jobs than people who work in the service and hospitality industries, especially those who have well-developed professional networks and high social capital. According to The Economist, this group are having a much better lockdown than most — homeschooling notwithstanding. But then, they probably also had a more comfortable life beforehand.

Rachel Coldicutt (Glimmers)

This post, "a scrapbook of links and questions that explore how civil society might be in a digital world," is a really interesting look at the physicality of our increasingly-digital world and how the messiness of human life is being 'cleaned up' by technology.


Remote work worsens inequality by mostly helping high-income earners

Given its potential benefits, telecommuting is an attractive option to many. Studies have shown a substantial number of workers would even agree to a lower salary for a job that would allow them to work from home. The appeal of remote work can be especially strong during times of crisis, but also exists under more normal circumstances.

The ongoing crisis therefore amplifies inequalities when it comes to financial and work-life balance benefits. If there’s a broader future adoption of telecommuting, a likely result of the current situation, that would still mean a large portion of the working population, many of them low-income workers, would be disadvantaged

Georges A. Tanguay & Ugo Lachapelle (The Conversation)

There are some interesting graphs included in this Canadian study of remote work. While I've written plenty about remote work before, I don't think I've really touched on how much it reinforces white, middle-class, male privilege.

The BBC has an article entitled Why are some people better at working from home than others? which suggests that succeeding and/or flourishing in a remote work situation is down to the individual, rather than the context. The truth is, it's almost always easier to be a man in a work environment ⁠— remote, or otherwise. This is something we need to change.


Unreal engine

A first look at Unreal Engine 5

We’ve just released a first look at Unreal Engine 5. One of our goals in this next generation is to achieve photorealism on par with movie CG and real life, and put it within practical reach of development teams of all sizes through highly productive tools and content libraries.

I remember showing my late grandmother FIFA 18 and her not being able to tell the difference between it and the football she watched regularly on the television.

Even if you're not a gamer, you'll find this video incredible. It shows how, from early next year, cinematic-quality experiences will be within grasp of even small development teams.


Grand illusion: how the pandemic exposed we're all just pretending

Our pretending we’re not drowning is the proof we have that we might still be worth saving. Our performing stability is one of the few ways that we hope we might navigate the narrow avenues that might still get us out.

A thing, though, about perpetuating misperceptions, about pretending – because you’re busy surviving, because you can’t stop playing the rigged game on the off-chance somehow that you might outsmart it, because you can’t help but feel like your circumstances must somehow be your fault – is that it makes it that much harder for any individual within the group to tell the truth.

Lynn Steger Strong (The Guardian)

Wouldn't it be amazing if we collectively turned to one another, recognised our collective desire not to play 'the game' any more, and decided to go after those who have rigged the system against us?


How to improve your walking technique

What research shows is that how we walk, our gait mechanics, isn't as “natural” as we might believe. We learn to walk by observing our parents and the world around us. As we grow up, we embody the patterns we see. These can limit the full potential of our gait. Some of us unconsciously prevent the pelvis and arms from swinging because of cultural taboos that frown upon having a gait that is, for example, too free.

Suunto

My late, great, friend Dai Barnes was a barefoot runner. He used to talk a lot about how people walk and run incorrectly, partly because of the 'unnatural' cushioning of their feet. This article gives some advice on improving your walking gait, which I tried out today on a long family walk.


Header image via xkcd

Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say

Post-pandemic surveillance culture

Today's title comes from Edward Snowden, and is a pithy overview of the 'nothing to hide' argument that I guess I've struggled to answer over the years. I'm usually so shocked that an intelligent person would say something to that effect, that I'm not sure how to reply.

When you say, ‘I have nothing to hide,’ you’re saying, ‘I don’t care about this right.’ You’re saying, ‘I don’t have this right, because I’ve got to the point where I have to justify it.’ The way rights work is, the government has to justify its intrusion into your rights.

Edward Snowden

This, then, is the fifth article in my ongoing blogchain about post-pandemic society, which already includes:

  1. People seem not to see that their opinion of the world is also a confession of character
  2. We have it in our power to begin the world over again
  3. There is no creature whose inward being is so strong that it is not greatly determined by what lies outside it
  4. The old is dying and the new cannot be born

It does not surprise me that those with either a loose grip on how the world works, or those who need to believe that someone, somewhere has 'a plan', believe in conspiracy theories around the pandemic.

What is true, and what can easily be mistaken for 'planning' is the preparedness of those with a strong ideology to double-down on it during a crisis. People and organisations reveal their true colours under stress. What was previously a long game now becomes a short-term priority.

For example, this week, the US Senate "voted to give law enforcement agencies access to web browsing data without a warrant", reports VICE. What's interesting, and concerning to me, is that Big Tech and governments are acting like they've already won the war on harvesting our online life, and now they're after our offline life, too.


I have huge reservations about the speed in which Covid-19 apps for contact tracing are being launched when, ultimately, they're likely to be largely ineffective.

[twitter.com/holden/st...](https://twitter.com/holden/status/1260813197402968071?s=20)

We already know how to do contact tracing well and to train people how to do it. But, of course, it costs money and is an investment in people instead of technology, and privacy instead of surveillance.

There are plenty of articles out there on the difference between the types of contact tracing apps that are being developed, and this BBC News article has a useful diagram showing the differences between the two.

TL;DR: there is no way that kind of app is going on my phone. I can't imagine anyone who I know who understands tech even a little bit installing it either.


Whatever the mechanics of how it goes about doing it happen to be, the whole point of a contact tracing app is to alert you and the authorities when you have been in contact with someone with the virus. Depending on the wider context, that may or may not be useful to you and society.

However, such apps are more widely applicable. One of the things we should do with any technology is think about the effects it could have. What other effects could an app like this have, especially if it's baked into the operating systems of devices used by 99% of smartphone users worldwide?

CC BY-SA 3.0, Link

The above diagram is Marshall McLuhan's tetrad of media effects, which is a useful frame for thinking about the impact of technology on society.

Big Tech and governments have our online social graphs, a global map of how everyone relates to everyone else in digital spaces. Now they're going after our offline social graphs too.


Exhibit A

[twitter.com/globaltim...](https://twitter.com/globaltimesnews/status/1223257710033960960)

The general reaction to this seemed to be one of eye-rolling and expressing some kind of Chinese exceptionalism when this was reported back in January.

Exhibit B

[www.youtube.com/watch](https://www.youtube.com/watch?v=viuR7N6E2LA)

Today, this Boston Dynamics robot is trotting around parks in Singapore reminding everyone about social distancing. What are these robots doing in five years' time?

Exhibit C

[twitter.com/thehill/s...](https://twitter.com/thehill/status/1246592135358484480?s=20)

Drones in different countries are disinfecting the streets. What's their role by 2030?


I think it's drones that concern me most of all. Places like Baltimore were already planning overhead surveillance pre-pandemic, and our current situation has only accelerated and exacerbated that trend.

In that case, it's US Predator drones that have previously been used to monitor and bomb places in the Middle East that are being deployed on the civilian population. These drones operate from a great height, unlike the kind of consumer drones that anyone can buy.

However, as was reported last year, we're on the cusp of photovoltaic drones that can fly for days at a time:

This breakthrough has big implications for technologies that currently rely on heavy batteries for power. Thermophotovoltaics are an ultralight alternative power source that could allow drones and other unmanned aerial vehicles to operate continuously for days. It could also be used to power deep space probes for centuries and eventually an entire house with a generator the size of an envelope.

Linda Vu (TechXplore)

Not only will the government be able to fly thousands of low-cost drones to monitor the population, but they can buy technology, like this example from DefendTex, to take down other drones.

That is, of course, if civilian drones continue to be allowed, especially given the 'security risk' of Chinese-made drones flying around.

It's interesting times for those who keep a watchful eye on their civil liberties and government invasion of privacy. Bear that in mind when tech bros tell you not to fear robots because they're dumb. The people behind them aren't, and they have an agenda.


Header image via Pixabay

Saturday seductions

Having a Bank Holiday in the UK on a Friday has really thrown me this week. So apologies for this link roundup being a bit later than usual...

I do try to inject a little bit of positivity into these links every week, but the past few days have made me a little concerned about our post-pandemic future. Anyway, here goes...


Radio Garden

This popped up in my Twitter feed this week and brought joy to my life. So simple but so effective: either randomly go to, or browse radio stations around the world. The one featured in the screenshot above is one close to me I forgot existed!


COVID and forced experiments

Every time we get a new kind of tool, we start by making the new thing fit the existing ways that we work, but then, over time, we change the work to fit the new tool. You’re used to making your metrics dashboard in PowerPoint, and then the cloud comes along and you can make it in Google Docs and everyone always has the latest version. But one day, you realise that the dashboard could be generated automatically and be a live webpage, and no-one needs to make those slides at all. Today, sometimes doing the meeting as a video call is a poor substitute for human interaction, but sometimes it’s like putting the slides in the cloud.

I don’t think we can know which is which right now, but we’re going through a vast, forced public experiment to find out which bits of human psychology will align with which kinds of tool, just as we did with SMS, email or indeed phone calls in previous generations.

Benedict Evans

An interesting post that both invokes 'green eggs and ham' as a metaphor, and includes an anecdote from an Ofcom report towards the end about a woman named Polly that no-one who does training or usability testing should ever forget.


Education is over…

What future learning environments need is not more mechanization, but more humanization; not more data, but more wisdom; not more objectification, but more subjectification; not more Plato, but more Aristotle.

William Rankin (regenerative.global)

I agree, although 'subjectification' is a really awkward word that suggests school subjects, which isn't the author's point. After all of this, I can't see parents, in particular, accepting going back to how school has been. At least, I hope not.



What Happens Next?

This guide... is meant to give you hope and fear. To beat COVID-19 in a way that also protects our mental & financial health, we need optimism to create plans, and pessimism to create backup plans. As Gladys Bronwyn Stern once said, “The optimist invents the airplane and the pessimist the parachute.”

Marcel Salathé & Nicky Case

Modelling what happens next in terms of lockdowns, etc. is not an easy thing to understand, and there are many competing opinions. This guide, with 'playable simulations', is the best thing I've seen so far, and I feel I'm much better prepared for the next decade (yes, you read that correctly).


Sheltering in Place with Montaigne

By the time Michel de Montaigne wrote “Of Experience,” the last entry in his third and final book of essays, the French statesman and author had weathered numerous outbreaks of plague (in 1585, while he was mayor of Bordeaux, a third of the population perished), political uprisings, the death of five daughters, and an onslaught of physical ailments, from rotting teeth to debilitating kidney stones.

[...]

The ubiquity of suffering heightened Montaigne’s attentiveness to the complexity of human experience. Pleasure, he contends, flows not from free rein but structure. The brevity of existence, he goes on, gives it a certain heft. Exertion, truth be told, is the best form of compensation. Time is slippery, the more reason to grab hold.

Drew Bratcher (The Paris Review)

Montaigne is one of my favourite authors, and having recently read Stefan Zweig's biography of him, he feels even more relevant to our times.


Clarity for Teachers: Day 42

There’s a children’s book that I love, The Greentail Mouse by Leo Lionni. It plays on the old theme of the town mouse and the country mouse. In this telling, the town mouse comes to visit his cousins in their rural idyll, and they ask him about life in the town. It’s horrible, he says, noisy and dangerous, but there is one day a year when it’s amazing, and that’s when carnival comes around. So the country mice decide to hold a carnival of their own: they make costumes and masks, they grunt and shriek and howl and jump around like wild things. But then, at some point, they forget that they are wearing masks; they end up believing that they are the fierce creatures they have been playing at being, and their formerly peaceful community becomes filled with fear, hatred and suspicion.

Dougald Hine

Dougald Hine is taking Charlie Davies' course Clarity for Teachers and is blogging each day about it. This is from the last post in the series. I'm including it partly to point towards Homeward Bound, which I've just signed up for, and which starts next Thursday.



BBC Archive: Empty sets

Give your video calls a makeover, with this selection of over 100 empty sets from the BBC Archive.

Who hasn't wanted to host a pub quiz from the Queen Vic, conduct a job interview from the confines of Fletch's cell, or catch up with friends and family from the bridge of the Liberator in Blake's 7?

I love this idea, to spice up Zoom calls, etc.


People you follow

First I search for my new item of interest, then I filter the results by “People I Follow.” (You can try it out with some of my recent searches: “Roger Angell,” “Captain Beefheart,” and “Rockford Files.”) Depending on the subject, I might have pages and pages of links, all handily selected for me by people I find interesting.

Austin Kleon

In his most recent newsletter, Austin Kleon referenced this post of his from five years ago. I think the idea is a great one and I'll definitely be doing this in future! Twitter moves settings around occasionally, but it's still there under 'search filters'.


68 Bits of Unsolicited Advice

Perhaps the most counter-intuitive truth of the universe is that the more you give to others, the more you’ll get. Understanding this is the beginning of wisdom.

Before you are old, attend as many funerals as you can bear, and listen. Nobody talks about the departed’s achievements. The only thing people will remember is what kind of person you were while you were achieving.

Over the long term, the future is decided by optimists. To be an optimist you don’t have to ignore all the many problems we create; you just have to imagine improving our capacity to solve problems.

Kevin Kelly (The Technium)

The venerable KK is now 68 years of age and so has dispensed some wisdom. It's a mixed bag, but I particularly liked the three bits of advice I've quoted above.


Header image by Ben Jennings.

Happiness is when what you think, what you say, and what you do are in harmony

If we're looking for silver linings around the pandemic, then one startlingly big one is the time people have had to reflect on their lives. When we're busy, we're forced to be pragmatic, and unfortunately that pragmatism can conflict with our core values.

This pragmatism has, certainly in my life, led to there being (small) disconnects between what I feel to be my values on the one hand, and my actions on the other. One thing I've been meaning to do for a while is to take the time to write down what I believe, in the style of Buster Benson's Codex Vitae.

He divides his beliefs into the following areas:

  • Aliens
  • Artificial intelligence
  • Cognitive biases
  • Consciousness
  • Critical thinking
  • Dialogue
  • Ecosystems
  • Game theory
  • Government
  • Health
  • Internal mental space
  • Mindfulness
  • Nature of reality
  • Policy
  • Purpose
  • Rules to live by
  • Spirituality
  • Technology
  • Vulnerability

...which may seem a little bit random, and reminds me somewhat of Jorge Luis Borges' Celestial Emporium of Benevolent Knowledge ("those that from afar look like flies"). Having said that, starting with one's inner ontology is probably the best place to start.

Why do all this? Well, if you know what you believe then it's easier to draw lines, 'red' or otherwise, and know what you will and will not stand for. It's a guide to life, which of course can change over time, but at least serves as a guide.


The reason I've never managed to get around to writing down my beliefs in a way similar to Buster is, I would say, twofold. First, I'm unwilling to write down my religious beliefs, such as they are. Second, all of this looks like a rather large undertaking.

Instead, I'm going to use the rather helpful time horizon that the pandemic provides to think about what I'd like the 'new normal' to look like, about what I'm going to accept and what I am not. These take the form of aphorisms or reminders to myself.


  1. Life is too short to deal with adults who display little in the way of emotional intelligence.
  2. Organisations are groups of people that can have a positive or negative effect on the world. Do not work with or for the latter.
  3. Technology can free people or it can enslave them, so work to give as many people as much freedom as possible.
  4. Removing ego from the equation gets things done.
  5. Education is not the same as learning, nor are qualifications the same as real-world knowledge, skills and experience.
  6. Happiness is not something that you can find, but rather it is something that you discover once you stop looking for it.
  7. How you say or do something is as important as what you say or what you do.
  8. We all will die and don't know when, so act today in a way whereby people will remember you well.
  9. You cannot control what other people say, do, or think.
  10. Money can only buy choices, not happiness, time, or anything else that constitutes human flourishing.

Yours may be different, and these are just what came tumbling out this time around, but they're the ten that I've printed out and stuck to the back of my home office door.


Quotation-as-title by Mahatma Gandhi. Photo by Ishant Mishra.

Saturday scramblings

I've spent a lot more time on Twitter recently, where my feed seems to be equal parts anger and indignation (especially at Andrew Adonis) on the one hand, and jokes, funny anecdotes, and re-posted TikToks on the other.

In amongst all of that, and via Other Sources™, I've also found the following, some of which I think will resonate with you. Let me know on Twitter, Mastodon, or in the comments if that's the case!


School Work and Surveillance

So, what happens now that we're all doing school and work from home?

Well, for one thing, schools are going to be under even more pressure to buy surveillance software — to prevent cheating, obviously, but also to fulfill all sorts of regulations and expectations about "compliance." Are students really enrolled? Are they actually taking classes? Are they doing the work? Are they logging into the learning management system? Are they showing up to Zoom? Are they really learning anything? How are they feeling? Are they "at risk"? What are teachers doing? Are they holding class regularly? How quickly do they respond to students' messages in the learning management system?

Audrey Watters (Hack Education)

Good stuff, as always, by Audrey Watters, who has been warning about this stuff for a decade.


We're knee-deep in shit and drinking cups of tea

Of course this government are failing to deal with a pandemic. At the fag end of neoliberalism, they don’t exist to do much more than transfer public assets into private hands. What we’re living through is exactly what would happen if we’d elected a firm of bailiffs to cure polio.  That’s not to say that they won’t use this crisis, as they would any other, to advance a profoundly reactionary agenda. The austerity they’ll tell us they need to introduce to pay for this will make the last decade seem like Christmas at Elton John’s house.

There’s an old joke about a guy going to hell. The Devil shows him round all the rooms where people are being tortured in a variety of brutal ways. Eventually, they come to a room where everybody is standing knee-deep in shit and drinking cups of tea. The guy chooses this as the place to spend eternity, and the Devil shouts “Tea break’s over lads, back on your heads!” That, I suppose, is how I feel when I hear people crowing about how the government are being forced to implement socialist policies. Pretty soon, we’ll all be back on our heads.

Frankie Boyle (The Overtake)

As comedy has become more political over the last decade, one of the most biting commentators has been the Scottish comedian Frankie Boyle. I highly recommend following him on Twitter.


Novel adventures: 12 video games for when you’re too restless to read

A few keen readers have turned to essay collections, short stories or diaries, which are less demanding on the memory and attention, but video games may also offer a way back into reading during these difficult times. Here are 12 interesting puzzle and adventure games that play with words, text and narratives in innovative ways, which may well guide you back into a reading frame of mind.

Keith Stuart (The Guardian)

I hadn't heard of any of the games on this list (mobile/console/PC) and I think this is a great idea. Also check out the Family Video Game Database.


Career advice for people with bad luck

The company is not your family. Some of the people in the company are your friends in the current context. It’s like your dorm in college. Hopefully some of them will still be your friends after. But don’t stay because you’re comfortable.

[...]

When picking a job, yes, your manager matters. But if you have an amazing manager at a shit company you’ll still have a shit time. In some ways, it’ll actually be worse. If they’re good at their job (including retaining you), they’ll keep you at a bad company for too long. And then they’ll leave, because they’re smart and competent.

Chief of Stuff (Chief's newsletter)

Most of this advice is focused on the tech sector, but I wanted to highlight the above, about 'friends' at work and the relative importance of having a good boss.


Are we too busy to enjoy life?

“You cannot step into the same river twice, for other waters are continually flowing on,” supposedly said Heraclitus. Time is like a river. If you’re too busy to enjoy life—too busy to spend time with friends and family, too busy to learn how to paint or play the guitar, too busy to go on that hike, too busy to cook something nice for yourself—these moments will be gone, and you will never get that time back.

You may think it’s too late. It’s not. Like many people, I personally experience time anxiety—the recurring thought that it’s too late to start or accomplish something new—but the reality is you probably still have many years in front of you. Defining what “time well spent” means to you and making space for these moments is one of the greatest gifts you can make to your future self.

Anne-Laure Le Cunff (Ness Labs)

Quality not quantity. Absolutely, and the best way to do that is to be in control of every area of your life, not beholden to someone else's clock.


Labour HQ used Facebook ads to deceive Jeremy Corbyn during election campaign

Labour officials ran a secret operation to deceive Jeremy Corbyn at last year’s general election, micro-targeting Facebook adverts at the leader and his closest aides to convince them the party was running the campaign they demanded.

Campaign chiefs at Labour HQ hoodwinked their own leader because they disapproved of some of Corbyn’s left-wing messages.

They convinced him they were following his campaign plans by spending just £5,000 on adverts solely designed to be seen by Corbyn, his aides and their favourite journalists, while pouring far more money into adverts with a different message for ordinary voters.

Tim Shipman (The Times)

This article by the political editor of The Times is behind a paywall. However, the above is all you need to get the gist of the story, which reminds me of a story about the CEO of AT&T, the mobile phone network.

At a time when AT&T were known for patchy coverage, technicians mapped where the CEO frequently went (home, work, golf club, etc.) and ensured that those locations had full signal. Incredible.


We can’t grow our way out of poverty

Poverty isn’t natural or inevitable. It is an artifact of the very same policies that have been designed to syphon the lion’s share of global income into the pockets of the rich. Poverty is, at base, a problem of distribution.

Jason Hickel (New Internationalist)

There's some amazing data in this article, along with some decent suggestions on how we can make society work for the many, and not just the few. Also see this: wealth shown to scale.


On Letting Go of Certainty in a Story That Never Ends

Possessed of no such capacity for superior force, fairytale characters are given tasks that are often unfair verging on impossible, imposed by the more powerful—climb the glass mountain, sort the heap of mixed grain before morning, gather a feather from the tail of the firebird. They are often mastered by alliances with other overlooked and undervalued players—particularly old women (who often turn out to be possessed of supernatural powers) and small animals, the ants who sort the grain, the bees who find the princess who ate the honey, the birds who sing out warnings. Those tasks and ordeals and quests mirror the difficulty of the task of becoming faced by the young in real life and the powers that most of us have, alliance, persistence, resistance, innovation. Or the power to be kind and the power to listen—to name two powers that pertain to storytelling and to the characters these particular stories tell of.

Rebecca Solnit (Literary Hub)

What was it Einstein said? “If you want your children to be intelligent, read them fairy tales. If you want them to be more intelligent, read them more fairy tales.”


Private gain must no longer be allowed to elbow out the public good

The term ‘commons’ came into widespread use, and is still studied by most college students today, thanks to an essay by a previously little-known American academic, Garrett Hardin, called ‘The Tragedy of the Commons’ (1968). His basic claim: common property such as public land or waterways will be spoiled if left to the use of individuals motivated by self-interest. One problem with his theory, as he later admitted himself: it was mostly wrong.

Our real problem, instead, might be called ‘the tragedy of the private’. From dust bowls in the 1930s to the escalating climate crisis today, from online misinformation to a failing public health infrastructure, it is the insatiable private that often despoils the common goods necessary for our collective survival and prosperity. Who, in this system based on the private, holds accountable the fossil fuel industry for pushing us to the brink of extinction? What happens to the land and mountaintops and oceans forever ravaged by violent extraction for private gain? What will we do when private wealth has finally destroyed our democracy?

Dirk Philipsen (Aeon)

Good to see more pushback on the notion of 'the tragedy of the commons'. Instead of metaphorically allowing everyone to graze their own cows on the common, we need to socialise all the cows.


Header image by Jaymantri. Gifs via Giphy.

The old is dying and the new cannot be born

Education for a post-pandemic future


Welcome to the fourth instalment in this blog chain about post-pandemic society:

  1. People seem not to see that their opinion of the world is also a confession of character
  2. We have it in our power to begin the world over again
  3. There is no creature whose inward being is so strong that it is not greatly determined by what lies outside it

This time, I want to talk about education. It's been a decade since I left the classroom as a school teacher and senior leader but, just after doing so, I co-kickstarted a project called Purpos/ed: what's the purpose of education? While the original website has long since gone the way of all digital bits and bytes, it can still be accessed via the Internet Archive's Wayback Machine (which may take significantly longer to load than most websites, so be patient!)

There were some fantastic contributions to that project, each of which was 500 words long. We followed that up with image remixes, audio contributions, and even a one-day unconference at Sheffield Hallam University! All of the written contributions were compiled into a book that was published by Scholastic (I've still got a few copies if anyone wants one) and the campaign ended up being featured on the front page of the TES.


My reason for returning to this project is that it seems that many people, especially parents and educators, are once again thinking about the purpose of education. There is even a UNESCO Commission on the Futures of Education to which you can add your voice.

Below are some of my favourite responses to the Purpos/ed campaign, right after a video clip from Prof. Keri Facer, whose work (especially Learning Futures) served as our inspiration.

[vimeo.com/104793994](https://vimeo.com/104793994)

Before the first Purpos/ed post was written, I jotted down my own off-the-cuff answer: "the purpose of education is to aid our meditation on purposes — what should we do, why and how?". I know that's a bit glib, but it adds a reflexive twist to this debate: how sophisticated and sensitive to changing context are our education systems and discourse? I worry we may be in for a rude awakening when the education squabbles of the Easy Times are shown up as an irrelevant sideshow when the Hard Times bite.

David Jennings

Education should not just be about the ‘system’ or the schools, it should be about the community and drawing on the skills and knowledge that are within our local communities.  Enabling our children to learn from what has gone before to ensure that they enhance their own future. For many education provides an escape, a way out that broadens their horizons and provides them with opportunities that they did not realize existed, that can ultimately provide them with richness and most importantly happiness.

Dawn Hallybone

The internet provides us with rich and free spaces for expansive learning. The institutions only have left their monopoly on funding and on certification. And so capitalism has begun a new project. The first aim is to strike out at democratization of learning by privatizing education, by deepening barriers to equality and access. And the second more audacious aim is to privatize knowledge itself, to turn knowledge and learning into a commodity to be bought and sold like any other consumer good.

Thus we find ourselves at a turning point for the future of education. The contradictions inherent in the different views of the purpose of education do not allow any simple compromise or reform minded tinkering with the system. For those that believe in education as the practice of freedom there are two challenges: to develop a societal discourse around the purpose of education and secondly to develop transformative practice, as teacher students and student teachers.

Graham Attwell

"Education should disrupt as much as it builds" (David White)
CC BY-NC-SA Josie Fraser

Education should critically ensure children, young people and adults are equipped to be unsettled, to be confronted by difference, to be changed, and to effect change. Education is a conduit to different cultures, different places, different times - to different ways of thinking about things and doing things. Education provides us with an introduction to things unimagined and unencountered. It should provide the critical challenge to examine our beliefs, interpretations and horizons, the ability to reexamine ourselves in new contexts, to develop new interests, to review the ways in which we understand ourselves and our place in the world. The purpose of education should be to expand expectations, not to confine them - to support our learners in understanding the impact they can and do have on their world. We cannot expect education built upon, and educators who model, a fixation with certainty and inflexibility to meet the urgent and ongoing needs of pressing social, economic and political change.

Josie Fraser

For me, the purpose of education is to become a better human being; recognising that we share a commonality with others around us and that we are bound to the ones who walked before and the ones to come. It allows us to draw on the experiences of the past and help prepare us to face the future (with all its attendant opportunities and issues). Conceived in this sense, it allows us to remove the primacy of the veneer (worker, teacher, student, friend) and reinstates these (important) roles within the context that they form part of a larger whole. Doing so would also allow us to rethink the relationship of means and ends and unlock the powerful impact this reconfiguration can have for the lives of people around us when we do treat them as they should be.

Nick Dennis

The desire to learn is woven into the concept of contentment and that, for me at least, is the basic purpose of any education system. Contentment can flourish into happiness, riches, recognition or any other myriad of emotional and material gain. But without a content society, with an ambition to continually discover and question the world around them throughout life, we end up with society's biggest enemies: complacency, stagnancy, apathy and ambivalence.

Ewan McIntosh

CC BY-NC-SA ianguest

An educated population is probably the least governable, the most likely to rebel, the most stubborn and the most critical. But it is a population capable of the most extraordinary things, because each person strides purposefully forward, and of their own volition, together, they seek a common destiny.

Stephen Downes

Education, it seems, is the method by which we attempt to make the world come out the way we want it to. It is about using our power to shape and control the world to come so that it comes into line with our own hopes and dreams. In any way we move it, even towards chaos and anarchy, we are still using our power to shape and control the future.

Dave Cormier

It is make or break time for humanity and we have a responsibility to draw a line in the sand, admit our mistakes and create a system of education that can begin to undo the harm that we have done to the world. For all the talk over the last twenty years of the ‘global village’, it has not stopped us continuing to destroy our planet, to wage wars and to continue to ignore the inequalities in society. What is the purpose of education? Surely, it is to create unity by helping future generations to recognise the values that humanity shares.

James Michie

As Purpos/ed was a non-partisan campaign, Andy Stewart and I didn't give our views on the purpose of education. But perhaps, in a follow-up post, it's time to explicitly state what, for me, it's all about? I'd certainly like to read what others are thinking...


Quotation-as-title from Antonio Gramsci. Header image via Pixabay.

Saturday sandcastles

The photos of brutalist sandcastles accompanying this week's link roundup made me both smile and really miss care-free walks on the beach. Although technically we're still allowed to visit the coast, our local council has closed nearby car parks.

This week I've been busy, busy, but managed to squeeze in a bit of non-fiction reading, the best of which I'm sharing below. Oh, and one link that I can't really quote is UnblockIt, which was shared via our team chat this week. If your ISP filters certain sites, you might want to bookmark it...


There will be no 'back to normal'

In this article, we summarise and synthesise various - often opposing - views about how the world might change. Clearly, these are speculative; no-one knows what the future will look like. But we do know that crises invariably prompt deep and unexpected shifts, so that those anticipating a return to pre-pandemic normality may be shocked to find that many of the previous systems, structures, norms and jobs have disappeared and will not return.

Nesta

I'm going to return to this article time and again, as it breaks down in a really helpful way what's likely to happen post-pandemic in the following areas: political, economic, sociocultural, technological, legal, and environmental.


Plan for 5 years of lockdown

I’m attempting to be pragmatic. I think this is one of those times where we should hope for the best but plan for the worst. Crucially, I think that a terrifying number of people are in denial about the timescales of disruption that Covid-19 will cause, and this is causing them to make horrible personal and professional decisions. I believe that we have a responsibility to consider any reasonably likely worst case scenario, and take appropriate steps to mitigate it. But to do that we have to be honest about the worst case.

Patrick Gleeson

It's hard to disagree with the points made in this post, especially as the scenario planning that universities are doing seems to point in the same direction. Having said that, I don't think 'lockdown' will mean the same thing everywhere and at each stage of the pandemic.


'Will coronavirus change our attitudes to death? Quite the opposite'

For centuries, people used religion as a defence mechanism, believing that they would exist for ever in the afterlife. Now people sometimes switch to using science as an alternative defence mechanism, believing that doctors will always save them, and that they will live for ever in their apartment. We need a balanced approach here. We should trust science to deal with epidemics, but we should still shoulder the burden of dealing with our individual mortality and transience.

The present crisis might indeed make many individuals more aware of the impermanent nature of human life and human achievements. Nevertheless, our modern civilisation as a whole will most probably go in the opposite direction. Reminded of its fragility, it will react by building stronger defences. When the present crisis is over, I don’t expect we will see a significant increase in the budgets of philosophy departments. But I bet we will see a massive increase in the budgets of medical schools and healthcare systems.

Yuval Noah Harari

Some amazing writing, as ever, by Harari, who argues that, because our secular societies focus on the here and now rather than the afterlife, science has almost become a religion.


Brutalist sandcastle 02

A startup debt to talk about more: emotional debt

We incur emotional debt whenever there’s an experience we’ve had, but not fully digested in all aspects of it. In my trauma therapy training I learned that this is in fact a natural and important human survival skill. Imagine you’re living in a pre-historic village and it gets raided by a neighboring tribe. Although no one gets killed, a number of houses have been burned down and food has been stolen. The next morning the most important tasks for everyone are to protect the village again, rebuild the houses and hunt for food to survive. Many of the villagers will have been deeply traumatized from the fears and terror they experienced in their bodies. Since food and shelter takes first priority to humans, not processing these emotions for now is a debt that’s necessary and important to incur. We can put it aside and leave it stuck in our bodies, ready to reengage and digest it later. It’s a great survival feature if you will.

A couple of weeks later when everything has been rebuilt, there might be a chance for the local shaman to offer a ritual around the fireplace where everyone can gather and re-experience the emotions that were too difficult to deal with at the actual event of the raid: the rage and anger towards the attackers, the fear and the terror over their lives and eventually the grief for the loss of their goods and most importantly their safety. Once that has been felt and integrated, everyone is able to move on and the night of the village raid can safely go into the history books, fairy tales and heroes journey accounts that luckily everyone survived, yet learned from.

Leo Widrich

While this is framed in terms of startups, I think every organisation has 'emotional debt' that they have to deal with. I like this framing, and will be using it from now on to explain why teams need times of compression and decompression (instead of never-ending 'sprints').


Don’t let remote leadership bring out the worst in you

Recognize that the pressure you apply is a reaction to a construct of control. You think you can control people – and things – and the reality is you can’t. The quicker you can realize this, the sooner you can shift to a frame of mind where you can focus constructively on the things that actually help your team, such as: (1) Making it clear why the work matters (2) Creating milestones to help that person achieve that work (3) Giving as much context as possible so they can make the best decisions (4) Helping them think through tough problems they encounter.

Claire Lew

I've led a remote team for a couple of years now, and worked remotely for six years before that. Despite this, it's easy to fall into bad habits, so this is a useful article to remind all leaders (most of whom are remote now!) that the amount of time someone spends on something does not equate to progress made.



Google Apple Contact Tracing (GACT): a wolf in sheep’s clothes.

But the bigger picture is this: it creates a platform for contact tracing that works all across the globe for most modern smart phones (Android Marshmallow and up, and iOS 13 capable devices) across both OS platforms. Unless appropriate safeguards are in place (including, but not limited to, the design of the system as described above – we will discuss this more below) this would create a global mass-surveillance system that would reliably track who has been in contact with whom, at what time and for how long. (And where, if GPS is used to record the location.) GACT works much more reliably and extensively than any other system based on either GPS or mobile phone location data (based on cell towers) would be able to (under normal conditions). I want to stress this point because some people have responded to this threat saying that this is something companies like Google (using their GPS and WiFi names based location history tool) can already do for years. This is not the case. This type of contact tracing really brings it to another level.

Jaap-Henk Hoepman

This, by a professor in the Netherlands who focuses on 'privacy by design', is why I'm really concerned about the Google/Apple Contact Tracing (GACT) programme. It's only likely to be of marginal help in fighting the virus, but sets up a global surveillance network for decades to come.


Brutalist sandcastle 03

In this Zombie Apocalypse, your Homework is due at 5pm

Year in and year out, when school’s in, children know that they are to be at certain places at certain times, doing particular tasks in particular ways. And now, weeks loom ahead where they are faced with many of the same tasks, absent of all the pomp and circumstance. This is the ultimate zombie apocalypse nightmare—a pandemic has hit the world with a mighty force, schools and tuition centers are shut, and homework is still due. Children are adaptable creatures, but it will be challenging for many, if not most, to do all that they are expected to do under these altered conditions.

Youyenn Teo

I was attracted to this article by its great title, but it's actually an interesting insight into both education in a Singaporean context and the gendered nature of care in our societies.


Free Money for Surfers: A Genealogy of the Idea of Universal Basic Income

As cash transfers are increasingly seen as the ideal way to confront the magnitude of the coronavirus threat, it is unclear whether our political imagination is truly up to the task. The current crisis might accelerate rather than decrease our dependency on the market, strengthening capital’s grip on society. Large-scale public works are evidently unfeasible with physical distancing. But, with a clear medical equipment shortage and lacking trained personnel, there is obvious space for public planning responses, and “production for use value” seems ever more necessary. None of these ills will be solved by cash transfers.

Anton Jäger & Daniel Zamora

This, in the Los Angeles Review of Books, considers a new work by Peter Sloman entitled The Idea of a Guaranteed Income and the Politics of Redistribution in Modern Britain. Having previously been cautiously optimistic about Universal Basic Income (or 'cash transfers') I'm not so sure it would all work out so well. I'd rather we funded things like the NHS, but then that might be my white male privilege speaking.


How we made the Keep Calm and Carry On poster

I first found the poster in 2000, folded up at the bottom of a box of books we had bought at an auction. I liked it straight away and showed it to my wife Mary – she had it framed and put up in the shop. The next thing we found was that customers wanted to buy it. I suggested we make copies but Mary said: “No, it’ll spoil the purity.” She went away for a week’s holiday, so I secretly got 500 copies made.

Stuart Manley (interviewed by Malcolm Jack)

This ridiculously-famous poster was discovered in a wonderful second-hand bookshop not too far away from us, which we visit several times per year. I love the story behind it.


Images via The Guardian: For one tide only: modernist sandcastles – in pictures

Thus each man ever flees himself

There are some days during this current pandemic when, cocooned in my little bubble, I can forget for a few hours that the world has changed. Conversely, I encounter other days when my baseline existential angst spikes to a level just below "rocking backwards-and-forwards in the corner of the room".

There is a range of ways of obtaining help in such situations, including professional (therapy!), spiritual (religion!) and medical (drugs!). However, while I've dabbled with all three, perhaps my greatest solace comes from a bunch of balding white dudes who lived a couple of thousand years ago.

Yes, I'm talking about the Stoics. Having re-read Seneca's On the Tranquility of the Mind this week, I thought there were whole sections worth sharing for anyone in a similar predicament to me.


In this dialogue, Serenus explains to Seneca his problem. The details may have changed over the years (no slaves, and we tend not to be so envious about other people's crockery) but the gist is, at least for me, immediately recognisable:

The nature of this mental weakness which hovers between two alternatives, inclining strongly neither to the right nor to the wrong, I can better show you one part at a time than all at once; I will tell you my experience, you will find a name for my sickness. I am completely devoted, I admit, to frugality: I do not like a couch made up for show, or clothing produced from a chest or pressed by weights and a thousand mangles to make it shiny, but rather something homely and inexpensive that has not been kept specially or needs to be put on with anxious care; I like food that a household of slaves has not prepared, watching it with envy, that has not been ordered many days in advance or served up by many hands, but is easy to fetch and in ample supply; it has nothing outlandish or expensive about it, and will be readily available everywhere, it will not put a strain on one’s purse or body, or return by the way it entered; I like for my servant a young house-bred slave without training or polish, for silverware my country-bred father’s heavy plate that bears no maker’s stamp, and for a table one that is not remarkable for the variety of its markings or known to Rome for having passed through the hands of many stylish owners, but one that is there to be used, that makes no guest stare at it in endless pleasure or burning envy.
Then, after finding perfect satisfaction in all such things, I find my mind is dazzled by the splendour of some training-school for pages, by the sight of slaves decked out in gold and more scrupulously dressed than bearers in a procession, and a whole troop of brilliant attendants; by the sight of a house where even the floor one treads is precious and riches are strewn in every corner, where the roofs themselves shine out, and the citizen body waits in attendance and dutifully accompanies an inheritance whose days are numbered; need I mention the waters, transparent to the bottom and flowing round the guests even as they dine, or the banquets that in no way disgrace their setting? Emerging from a long time of dedication to thrift, luxury has enveloped me in the riches of its splendour, filling my ears with all its sounds: my vision falters a little, for it is easier for me to raise my mind to it than my eyes; and so I come back, not a worse man, but a sadder one, I no longer walk with head so high among those worthless possessions of mine, and I feel the sharpness of a secret pain as the doubt arises whether that life is not the better one. None of these things alters me, but none fails to unsettle me.

'Serenus' (in Seneca's 'On The Tranquility of the Mind')

As a result, Serenus asks Seneca for help, as he feels stuck between two stools: asceticism and luxury:

I ask you, therefore, if you possess any cure by which you can check this fluctuation of mine, to consider me worthy of being indebted to you for tranquillity. I am aware that these mental disturbances I suffer from are not dangerous and bring no threat of a storm; to express to you in a true analogy the source of my complaint, it is not a storm I labour under but seasickness: relieve me, then, of this malady, whatever it be, and hurry to aid one who struggles with land in his sight.

'Serenus' (in Seneca's 'On The Tranquility of the Mind')

Serenus' description of his 'mental disturbances' as being like seasickness really resonates with me. As a friend said earlier this week, we're both a little tired of the "constant up and down".

Seneca restates Serenus' problem, first stating what he doesn't require:

Accordingly, you have no need of those harsher measures that we have already passed over, that of sometimes opposing yourself, of sometimes getting angry with yourself, of sometimes fiercely driving yourself on, but rather of the one that comes last, having confidence in yourself and believing that you are on the right path and have not been sidetracked by the footprints crossing over, left by many rushing in different directions, some of them wandering close to the path itself.

Seneca, 'On The Tranquility of the Mind'

Another useful metaphor: being sidetracked by other people's footprints, and perhaps your own. Instead, Seneca explains, Serenus needs to have "confidence" in himself and believe that he is "on the right path".

Don't we all need that?

Seneca continues by saying that everyone is in the same boat, which might as well be named The Human Condition. What he diagnoses as the nub of the problem, which I think is particularly insightful, is our attempts to keep changing things. Ultimately, this simply means we live in a constant state of suspense and dissatisfaction.

Everyone is in the same predicament, both those who are tormented by inconstancy and boredom and an unending change of purpose, constantly taking more pleasure in what they have just abandoned, and those who idle away their time, yawning. Add to them those who twist and turn like insomniacs, trying all manner of positions until in their weariness they find repose: by altering the condition of their life repeatedly, they end up finally in the state that they are caught, not by dislike of change, but by old age that is reluctant to embrace anything new. Add also those who through the fault, not of determination but of idleness, are too constant in their ways, and live their lives not as they wish, but as they began. The sickness has countless characteristics but only one effect, dissatisfaction with oneself. This arises from a lack of mental balance and desires that are nervous or unfulfilled, when men’s daring or attainment falls short of their desires and they depend entirely on hope; such are always lacking in stability and changeable, the inevitable consequence of living in a state of suspense.

Seneca, 'On The Tranquility of the Mind'

Next, Seneca seemingly reaches through the ages to drive his point home with sentences which, despite being aimed at his interlocutor, seem targeted at me.

All these feelings are aggravated when disgust at the effort they have spent on becoming unsuccessful drives men to leisure, to solitary studies, which are unendurable for a mind intent on a public career, eager for employment, and by nature restless, since without doubt it possesses few enough resources for consolation; for this reason, once it has been deprived of those delights that business itself affords to active participants, the mind does not tolerate home, solitude, or the walls of a room, and does not enjoy seeing that it has been left to itself. This is the source of that boredom and dissatisfaction, of the wavering of a mind that finds no rest anywhere, and the sad and spiritless endurance of one’s leisure; and particularly when one is ashamed to confess the reasons for these feelings, and diffidence drives its torments inwards, the desires, confined in a narrow space from which there is no escape, choke one another; hence come grief and melancholy and the thousand fluctuations of an uncertain mind, held in suspense by early hopes and then reduced to sadness once they fail to materialize; this causes that feeling which makes men loathe their own leisure and complain that they themselves have nothing to keep them occupied, and also the bitterest feelings of jealousy of other men’s successes.

Seneca, 'On The Tranquility of the Mind'

Seneca continues to give Serenus more advice in the dialogue but, every time I read these opening few pages, I feel like he has diagnosed not only my condition, but that of all humankind.

While some people are always on the lookout for the new and the novel, I'm realising that the best way to spend the second half of my life might well be to spend a good amount of time wringing as much value as possible from things I've already discovered.


The quotations in this post are from the Oxford World's Classics version of Seneca's Dialogues and Essays. If you can't find it in your local library, try here.

If you're new to the Stoics, may I suggest starting with The Enchiridion by Epictetus? I'd follow that with Marcus Aurelius' Meditations (buy a decent quality dead-tree version; you'll thank me in years to come) and then dip into Seneca's somewhat voluminous works.


Header image by Simon Migaj. Quotation-as-title from Lucretius, who Seneca quotes in 'On the Tranquility of the Mind'.

Saturday scrubbings

This week on Thought Shrapnel I've been focused on messing about with using OBS to create videos. So much, in fact, that this weekend I'm building a new PC to improve the experience.

Sometimes in these link roundups I try and group similar kinds of things together. But this week, much as I did last week, I've just thrown them all in a pot like Gumbo.

Tell me which links you find interesting, either in the comments, or on Twitter or the Fediverse (feel free to use the hashtag #thoughtshrapnel)


Melting Ice Reveals a “Lost” Viking-Era Pass in Norway’s Mountains

About 60 artifacts have been radiocarbon dated, showing the Lendbreen pass was widely used from at least A.D. 300. “It probably served as both an artery for long-distance travel and for local travel between permanent farms in the valleys to summer farms higher in the mountains, where livestock grazed for part of the year,” says University of Cambridge archaeologist James Barrett, a co-author of the research.

Tom Metcalfe (Scientific American)

I love it when the scientific and history communities come together to find out new things about our past. Especially about the Vikings, who were straight-up amazing.


University proposes online-only degrees as part of radical restructuring

Confidential documents seen by Palatinate show that the University is planning “a radical restructure” of the Durham curriculum in order to permanently put online resources at the core of its educational offer, in response to the Covid-19 crisis and other ongoing changes in both national and international Higher Education.

The proposals seek to “invert Durham’s traditional educational model”, which revolves around residential study, replacing it with one that puts “online resources at the core enabling us to provide education at a distance.” 

Jack Taylor & Tom Mitchell (Palatinate)

I'm paying attention to this as Durham University is one of my alma maters* but I think this is going to be a common story across a lot of UK institutions. They've relied for too long on the inflated fees brought in by overseas students and now, in the wake of the pandemic, need to rapidly find a different approach.

*I have a teaching qualification and two postgraduate degrees from Durham, despite a snooty professor telling me when I was 17 years old that I'd never get in to the institution 😅


Abolish Silicon Valley: memoir of a driven startup founder who became an anti-capitalist activist

Liu grew up a true believer in "meritocracy" and its corollaries: that success implies worth, and thus failure is a moral judgment about the intellect, commitment and value of the failed.

Her tale -- starting in her girlhood bedroom and stretching all the way to protests outside of tech giants in San Francisco -- traces a journey of maturity and discovery, as Liu confronts the mounting evidence that her life's philosophy is little more than the self-serving rhetoric of rich people defending their privilege, the chasm between her lived experience and her guiding philosophy widens until she can no longer straddle it.

Cory Doctorow (Boing Boing)

This book is next on my non-fiction reading list. If your library is closed and doesn't have an online service, try this.


Cup, er, drying itself...

7 things ease the switch to remote-only workplaces

You want workers to post work as it’s underway—even when it’s rough, incomplete, imperfect. That requires a different mindset, though one that’s increasingly common in asynchronous companies. In traditional companies, people often hesitate to circulate projects or proposals that aren’t polished, pretty, and bullet-proofed. It’s a natural reflex, especially when people are disconnected from each other and don’t communicate casually. But it can lead to long delays, especially on projects in which each participant’s progress depends on the progress and feedback of others. Location-independent companies need a culture in which people recognize that a work-in-progress is likely to have gaps and flaws and don’t criticize each other for them. This is an issue of norms, not tools.

Edmund L. Andrews, Stanford (Futurity)

I discovered this via Stephen Downes, who highlights the fifth point in this article ('single source of truth'). I've actually highlighted the sixth one ('breaking down the barriers to sharing work') as I've also seen that as an important thing to check for when hiring.


How the 5G coronavirus conspiracy theory tore through the internet

The level of interest in the coronavirus pandemic – and the fear and uncertainty that comes with it – has caused tired, fringe conspiracy theories to be pulled into the mainstream. From obscure YouTube channels and Facebook pages, to national news headlines, baseless claims that 5G causes or exacerbates coronavirus are now having real-world consequences. People are burning down 5G masts in protest. Government ministers and public health experts are now being forced to confront this dangerous balderdash head-on, giving further oxygen and airtime to views that, were it not for the major technology platforms, would remain on the fringe of the fringe. “Like anti-vax content, this messaging is spreading via platforms which have been designed explicitly to help propagate the content which people find most compelling; most irresistible to click on,” says Smith from Demos.

James Temperton (Wired)

The disinformation and plain bonkers-ness around this 'theory' of linking 5G and the coronavirus is a particularly difficult thing to deal with. I've avoided talking about it on social media as well as here on Thought Shrapnel, but I'm sharing this as it's a great overview of how these things spread — and who's fanning the flames.


A Manifesto Against EdTech© During an Emergency Online Pivot

The COVID-19 pandemic is an unprecedented moment in the history of social structures such as education. After all of the time spent creating emergency plans and three- or five-year road maps that include fail safe options, we find ourselves in the actual emergency. Yet not even a month into global orders of shelter in place, there are many education narratives attempting to frame the pandemic as an opportunity. Extreme situations can certainly create space for extraordinary opportunities, but that viewpoint is severely limited considering this moment in time. Perhaps if the move to distance/online/remote education had happened in a vacuum that did not involve a global pandemic, millions sick, tens of thousands dead, tens of millions unemployed, hundreds of millions hungry, billions anxious and uncertain of society’s next step…perhaps then this would be that opportunity moment. Instead, we have a global emergency where the stress is felt everywhere but it certainly is not evenly distributed, so learning/aligning/deploying/assessing new technology for the classroom is not universally feasible. You can’t teach someone to swim while they’re drowning.

Rolin Moe

Rolin Moe is a thoughtful commentator on educational technology. This post was obviously written quickly (note the typo in the URL when you click through, as well as some slightly awkward language) and I'm not a fan of the title Moe has settled on. That being said, the point about this not being an 'opportunity' for edtech is a good one.


Dishes washing themselves

NHS coronavirus app: memo discussed giving ministers power to 'de-anonymise' users

Produced in March, the memo explained how an NHS app could work, using Bluetooth LE, a standard feature that runs constantly and automatically on all mobile devices, to take “soundings” from other nearby phones through the day. People who have been in sustained proximity with someone who may have Covid-19 could then be warned and advised to self–isolate, without revealing the identity of the infected individual.

However, the memo stated that “more controversially” the app could use device IDs, which are unique to all smartphones, “to enable de-anonymisation if ministers judge that to be proportionate at some stage”. It did not say why ministers might want to identify app users, or under what circumstances doing so would be proportionate.

David Pegg & Paul Lewis (The Guardian)

This all really concerns me, as not only is this kind of technology only going to be of marginal use in fighting the coronavirus, once this is out of the box, what else is it going to be used for? Also check out Vice's coverage, including an interview with Edward Snowden, and this discussion at Edgeryders.


Is This the Most Virus-Proof Job in the World?

It’s hard to think of a job title more pandemic-proof than “superstar live streamer.” While the coronavirus has upended the working lives of hundreds of millions of people, Dr. Lupo, as he’s known to acolytes, has a basically unaltered routine. He has the same seven-second commute down a flight of stairs. He sits in the same seat, before the same configuration of lights, cameras and monitors. He keeps the same marathon hours, starting every morning at 8.

Social distancing? He’s been doing that since he went pro, three years ago.

For 11 hours a day, six days a week, he sits alone, hunting and being hunted on games like Call of Duty and Fortnite. With offline spectator sports canceled, he and other well-known gamers currently offer one of the only live contests that meet the standards of the Centers for Disease Control and Prevention.

David Segal (The New York Times)

It's hard to argue with my son these days when he says he wants to be a 'pro gamer'.

(a quick tip for those who want to avoid 'free registration' and some paywalls — use a service like Pocket to save the article and read it there)


Capitalists or Cronyists?

To be clear, socialism may be a better way to go, as evidenced by the study showing 4 of the 5 happiest nations are socialist democracies. However, unless we’re going to provide universal healthcare and universal pre-K, let’s not embrace The Hunger Games for the working class on the way up, and the Hallmark Channel for the shareholder class on the way down. The current administration, the wealthy, and the media have embraced policies that bless the caching of power and wealth, creating a nation of brittle companies and government agencies.

Scott Galloway

A somewhat rambling post, but one which explains the difference between a form of capitalism that (theoretically) allows everyone to flourish, and crony capitalism, which doesn't.


Header image by Stephen Collins at The Guardian

Creating and seeding your own torrents using archive.org and Transmission

Update: fixed video!


I've been experimenting this Easter weekend, and today did an impromptu livestream via Periscope. My focus was on using the Internet Archive and Transmission to create and seed torrents.

As I stated back when I was much, much younger(!), and as I've blogged about recently, I think BitTorrent is massively under-used in education, especially for sharing entire courses or large collections of resources at once.

For those interested, I downloaded the Periscope video via pscp.download.
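
For the curious, there's no magic inside a torrent file: it's just a 'bencoded' dictionary describing the files and trackers. Transmission and archive.org handle all of this for you, but as a hypothetical illustration of the format (not what Transmission uses internally), here's a minimal bencode decoder in Python:

```python
# Minimal decoder for bencode, the encoding used by .torrent files.
# Illustrative sketch only; real clients like Transmission do this for you.

def bdecode(data: bytes):
    value, rest = _parse(data)
    if rest:
        raise ValueError("trailing data after bencoded value")
    return value

def _parse(data: bytes):
    if data[:1] == b"i":                      # integer: i<digits>e
        end = data.index(b"e")
        return int(data[1:end]), data[end + 1:]
    if data[:1] == b"l":                      # list: l<items>e
        items, data = [], data[1:]
        while data[:1] != b"e":
            item, data = _parse(data)
            items.append(item)
        return items, data[1:]
    if data[:1] == b"d":                      # dict: d<key><value>...e
        result, data = {}, data[1:]
        while data[:1] != b"e":
            key, data = _parse(data)
            value, data = _parse(data)
            result[key] = value
        return result, data[1:]
    colon = data.index(b":")                  # byte string: <length>:<bytes>
    length = int(data[:colon])
    start = colon + 1
    return data[start:start + length], data[start + length:]
```

Run it on a fragment like `b"d4:name4:spame"` and you get back an ordinary Python dictionary, which is all a torrent's metadata really is.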

3 apps to help avoid post-pandemic surveillance culture [VIDEO]

This is an experiment using a green screen and OBS. Let me know what you think!

  • Briar
  • Tor
  • LibreTorrent
  • F-Droid

Friday fashionings

When sitting down to put together this week's round-up, which is coming to you slightly later than usual because of <gestures indeterminately> all this, I decided that I'd only focus on things that are positive; things that might either raise a smile or make you think "oh, interesting!"

Let me know if I've succeeded in the comments below, via Twitter, Mastodon, or via email!


Digital Efficiency: the appeal of the minimalist home screen

The real advantage of going with a launcher like this instead of a more traditional one is simple: distraction reduction and productivity increases. Everything done while using this kind of setup is deliberate. There is no scrolling through pages upon pages of apps. There is no scrolling through Google Discover with story after story that you will probably never read. Instead between 3–7 app shortcuts are present, quick links to clock and calendar, and not much else. This setup requires you as the user to do an inventory of what apps you use the most. It really requires the user to rethink how they use their phone and what apps are the priority.

Omar Zahran (UX Collective)

A year ago, I wrote a post entitled Change your launcher, change your life about minimalist Android launchers. I'm now using the Before Launcher, because of the way you can easily and without any fuss customise notifications. Thanks to Ian O'Byrne for the heads-up in the We Are Open Slack channel.


It's Time for Shoulder Stretches

Cow face pose is the yoga name for that stretch where one hand reaches down your back, and the other hand reaches up. (There’s a corresponding thing you do with your legs, but forget it for now—we’re focusing on shoulders today.) If you can’t reach your hands together, it feels like a challenging or maybe impossible pose.

Lifehacker UK

I was pretty shocked that I could barely do this with my right hand at the top and my left at the bottom. I was very shocked that I got nowhere near the other way around. It just goes to show that those of us who work at home really need to work on back muscles and flexibility.


Dr. Seuss’s Fox in Socks Rapped Over Dr. Dre’s Beats

As someone who a) thinks Dr. Dre was an amazing producer, and b) read Dr. Seuss’s Fox in Socks to his children roughly 1 million times (enough to be able to, eventually, get through the entire book at a comically high rate of speed w/o any tongue twisting slip-ups), I thought Wes Tank’s video of himself rapping Fox in Socks over Dre’s beats was really fun and surprisingly well done.

Jason Kottke

One of the highlights of my kids being a bit younger than they are now was to read Dr. Seuss to them. Fox in Socks was my absolute tongue-twisting favourite! So this blew me away, and then when I went through to YouTube, the algorithm recommended Daniel Radcliffe (the Harry Potter star) rapping Blackalicious' Alphabet Aerobics. Whoah.


Swimming pool with a view

Google launches free version of Stadia with a two-month Pro trial

Google is launching the free version of its Stadia game streaming service today. Anyone with a Gmail address can sign up, and Google is even providing a free two-month trial of Stadia Pro as part of the launch. It comes just two months after Google promised a free tier was imminent, and it will mean anyone can get access to nine titles, including GRID, Destiny 2: The Collection, and Thumper, free of charge.

Tom Warren (The Verge)

This is exactly the news I've been waiting for! Excellent.


Now is a great time to make some mediocre art

Practicing simple creative acts on a regular basis can give you a psychological boost, according to a 2016 study in the Journal of Positive Psychology. A 2010 review of more than 100 studies of art’s impact on health revealed that pursuits like music, writing, dance, painting, pottery, drawing, and photography improved medical outcomes, mental health, social networks, and positive identity. It was published in the American Journal of Public Health.

Gwen Moran (Fast Company)

I love all of the artists on Twitter and Instagram giving people daily challenges. My family have been following along with some of them!


What do we hear when we dream?

[R]esearchers at Norway's Vestre Viken Hospital Trust and the University of Bergen conducted a small study to quantify the auditory experience of dreamers. Why? Because they wanted to "assess the relevance of dreaming as a model for psychosis." Throughout history, they write, psychologists have considered dreamstates to be a model for psychosis, yet people experiencing psychosis usually suffer from auditory hallucinations far more than visual ones. Basically, what the researchers determined is that the reason so little is known about auditory sensations while dreaming is because, well, nobody asks what people's dreams sound like.

David Pescovitz (Boing Boing)

This makes sense, if you think about it. The advice for doing online video is always that you get the audio right first. It would seem that it's the same for dreaming: that we pay attention more to what we 'hear' than what we 'see'.



How boredom can inspire adventure

Humans can’t stand being bored. Studies show we’ll do just about anything to avoid it, from compulsive smartphone scrolling right up to giving ourselves electric shocks. And as emotions go, boredom is incredibly good at parting us from our money – we’ll even try to buy our way out of the feeling with distractions like impulse shopping.

Erin Craig (BBC Travel)

The story in this article about a prisoner of war who dreamed up a daring escape is incredible, but does make the point that dreaming big when you're locked down is a great idea.


But what could you learn instead?

“What did you learn today,” is a fine question to ask. Particularly right this minute, when we have more time and less peace of mind than is usually the norm.

It’s way easier to get someone to watch–a YouTube comic, a Netflix show, a movie–than it is to encourage them to do something. But it’s the doing that allows us to become our best selves, and it’s the doing that creates our future.

It turns out that learning isn’t in nearly as much demand as it could be. Our culture and our systems don’t push us to learn. They push us to conform and to consume instead.

The good news is that each of us, without permission from anyone else, can change that.

Seth Godin

A timely, inspirational post from the always readable (and listen-worthy) Seth Godin.


The Three Equations for a Happy Life, Even During a Pandemic

This column has been in the works for some time, but my hope is that launching it during the pandemic will help you leverage a contemplative mindset while you have the time to think about what matters most to you. I hope this column will enrich your life, and equip you to enrich the lives of the people you love and lead.

Arthur C. Brooks (The Atlantic)

A really handy way of looking at things, and I'm hoping that further articles in the series are just as good.


Images by Kevin Burg and Jamie Beck (they're all over Giphy so I just went to the original source and used the hi-res versions)

There is no creature whose inward being is so strong that it is not greatly determined by what lies outside it

Mental health, imagination, and post-pandemic futures

I guess, given that this is the third straight week I've written on the subject, that this could be considered a blogchain on post-pandemic reality. I'm fine with that, and although there's no need to read the previous two posts, you might want to do so for background:

  1. People seem not to see that their opinion of the world is also a confession of character
  2. We have it in our power to begin the world over again

In this post I want to talk about the effect of this period of lockdown on our collective mental health and ability to imagine the future.

The caveat is that I don't inhabit anyone else's brain than my own, and therefore am extrapolating from one specific example. I'm told that in statistics that's not recommended.


There are five very broad categories of people during this lockdown. You can imagine it as a spectrum, as there are those who are:

  • Working from home, and have done for a while
  • Working from home, and are new to it
  • Working at their usual place of work
  • Not working because they are unemployed
  • Not working because they are ill/retired

It's fair to say that the lockdown affects these groups in different ways. However, I think that they share quite a lot in common.

For people in all five groups, whatever their current status, they had plans for the future. Let's look at those out of work first: if you're ill, your plan is probably to get better; if you're retired you may have plans to visit the grandkids; or if you're unemployed the chances are you're looking forward to getting a job.


If you're employed, no matter where you work, then you're looking forward to any number of things: that promotion, the conference you're attending in a few months' time; or even just finishing the project you're working on.

Muppets

Instead, you're stuck at home. And as Christine Grové points out in this article about the longer-term effects of the coronavirus on education, that can have mental health implications ⁠— what some term a 'social recession':

A social recession can have profound physical, economic and psychological effects. Though we are in uncharted territory, data suggests that quarantine can seriously affect people’s mental health, leading to anger, confusion and post-traumatic stress symptoms. As this pandemic continues, the continuous provision of mental health information is critical. Honest and fast communication about how to reduce isolation and increase connection while physically distancing is essential. Health messages need to also include specific ways to look after your mental health. As governments and health regulatory bodies respond to the impacts of the pandemic, an interdisciplinary expert task force on the short- and long-term mental health effects is urgently needed to address the potential risks and repercussions for children, youth, adults, parents, families and the community.

Christine Grové

Thankfully, due to an unprecedented government intervention it seems most people in the UK don't need to worry about being out on the streets. They're covered in some way. Meanwhile, the Spanish government is apparently planning to roll out basic income, not temporarily, but in a way "that stays forever, that becomes a structural instrument, a permanent instrument".

We're all familiar with Maslow's hierarchy of needs represented as a pyramid, but these days it tends to be depicted in sociological research in a more dynamic way, with overlapping needs that can take precedence at any given time.

Dynamic hierarchy of needs (CC BY-SA Philipp Guttmann)

I guess what I'm trying to say is that it's great that most people in developed countries are going to be able to have their safety needs met throughout this crisis. What's not certain is that psychological needs will be met, never mind those around belonging, esteem, and self-actualization.

That's because the short version of the problem with the world pre-pandemic is 'capitalism' but the slightly longer and more accurate version is 'neoliberal capitalism'. That modifier is an important one.

Writing in The Financial Times, author Arundhati Roy reflects on India's response to the coronavirus. She explains how it could be a great leveller:

Whatever it is, coronavirus has made the mighty kneel and brought the world to a halt like nothing else could. Our minds are still racing back and forth, longing for a return to “normality”, trying to stitch our future to our past and refusing to acknowledge the rupture. But the rupture exists. And in the midst of this terrible despair, it offers us a chance to rethink the doomsday machine we have built for ourselves. Nothing could be worse than a return to normality.

Arundhati Roy (The Financial Times)

Normality for too many people in this world is predicated on a logic that enriches a very small number of people while hollowing-out the world for the 99%. This is done through markets and competition being introduced to every area of life, so that 'success' or 'failure' in life is reduced to an individual's responsibility.

Shaun the Sheep

Under such conditions, neoliberal societies are geared towards short-termism, as evidenced by our woeful response to the dangers of climate change. As Dark Matter Labs put it:

Our underlying structural capacities and incentives are deeply coded to advance short-term thinking and decision-making. This fundamental societal deficit in future-oriented thinking, permeates our psychological, cultural, technological, legal, financial and political infrastructures—amplifying a bias towards the present—resulting in short-sighted and vulnerable subjects, short-term financial investments, waste economies and a growing political fracture between intergenerational relations.

Dark Matter Labs

This is an unprecedented opportunity for societies to change track and to get off the neoliberal rails. One way of doing that is to use tools to think about the potential impact of the changes we're experiencing. Only then can we think about potential solutions that benefit the many instead of the few.

In a preview for a new book coming out soon, Scott Smith explains a simple technique to map impacts and implications:

Source: How to Future, Changeist, 2020.

He gives the example of the majority of people in 'professional' occupations now working from home. What are the first, second, and third level impacts? What kind of impacts are they?

Image via Scott Smith (S=social, T=tech, E=economic, P=political/legal, V=values)

Some of these are positive impacts, some negative, and some neutral. Some have individual effects, some are felt at the organisational or societal level. Either way, now is probably a good time to be thinking about a new venture that will both help people and be profitable in the post-pandemic landscape.
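
Smith's mapping exercise lends itself to a simple tree structure: a first-order change at the centre, with rings of tagged consequences fanning out from it. Here's a hypothetical Python sketch (the example impacts and the `Impact` helper are my own invention for illustration, not Smith's tooling; the category letters follow his S/T/E/P/V scheme):

```python
# A futures-wheel sketch: a first-order change fans out into second- and
# third-order impacts, each tagged with a category (S=social, T=tech,
# E=economic, P=political/legal, V=values). Example content is invented.
from dataclasses import dataclass, field

@dataclass
class Impact:
    description: str
    category: str                           # one of "S", "T", "E", "P", "V"
    consequences: list = field(default_factory=list)

wheel = Impact("Most professionals work from home", "S", [
    Impact("Less commuting", "S", [
        Impact("Reduced demand for city-centre offices", "E"),
    ]),
    Impact("Home broadband becomes critical infrastructure", "T", [
        Impact("Pressure to regulate ISPs as utilities", "P"),
    ]),
])

def flatten(impact, level=1):
    """Yield (level, category, description) for every node, depth-first."""
    yield (level, impact.category, impact.description)
    for child in impact.consequences:
        yield from flatten(child, level + 1)

for level, cat, desc in flatten(wheel):
    print(f"{'  ' * (level - 1)}[{cat}] {desc}")
```

Walking the tree like this makes it easy to spot where the second- and third-order effects pile up in one category, which is where the exercise gets interesting.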


One thing we've taken for granted over the last couple of decades is that everything is manufactured in China. However, Matt Webb has been reading the runes and thinking about this:

The hegemony of manufacturing in China is assumed. But my feeling is that the threshold between centralised and local is a fine line, and it's closer than it looks.

I was reading recently about loo paper, because of course I was. Apparently it's always made close to the place of sale because it's cheap and not very dense and so disproportionately expensive to ship. So where else are these fine lines, and how quickly could we tip over them?

Matt Webb

We no longer live in a world where defined groups of people fit neatly into preconceived media demographics. I can remember reading about DINKYs (Dual Income No Kids Yet) back when I was doing Media Studies as a GCSE student. The world has moved on.

But now we've got micro-targeted advertising and e-commerce. It's absurd to stock physical stores with items that probably won't be bought, just to make a particular size and colour available. And there's no ABC1 sociodemographic group now, people form their own communities. You can launch a micro-brand on Instagram in an instant (and either keep it niche or scale it to billions). Where's the requirement for mass anything? The logic collapses.

So maybe the logic supporting centralised supply chains has collapsed too.

Matt Webb

There's many people coming together to think through the implications of the coronavirus and what a post-pandemic landscape could (or should) look like for them and their sector. One I found particularly illuminating was on Subpixel Space, where Toby Shorin had a chat with his friends and shared the result.

Bugs Bunny and Daffy Duck

I don't agree with all of the predictions, but a few really jumped out at me. For example:

Media and content brands with membership models will likely do very well, as will games, both indie and platforms like Roblox. We’ll see more brands which do not hold any assets whatsoever, but are simply groupings of individuals giving themselves a name and a presence.

Toby Shorin, Drew Austin, Kara Kittel, Kei Kreutler, Edouard Urcades

I think this is already happening. For example, a few educators banded together to create the (now quite slick-looking) Higher Ed Learning Collective. This started with one guy sitting on his couch creating a Facebook group.

Given all of the digital tools at our disposal, there's no reason for people to wait in order to experiment, or even to gain financing for their idea. In fact, getting people in on the ground floor is a great way of sharing ownership of the project.

Building brands around shared ownership with customers will probably be increasingly important. Expect to see more crowdfunding, patronage, community, and membership-based go-to-market strategies which make ownership an explicit part of the brand experience. Several crypto-adjacent teams are exploring this territory already.

Toby Shorin, Drew Austin, Kara Kittel, Kei Kreutler, Edouard Urcades

We've spent the last decade living most of our social lives online out in the open. That's becoming less and less tenable now that pretty much everyone is online. We're collectively looking for smaller spaces to share ideas with people who will read us in the right way.

There will need to be new types of interface and digital social environment to support the continued proliferation of lifestyles. We’ll probably see a flourishing of new, social micro-networks. They will not be for everyone. They will be private in nature, and will support between 20 and 1000 people.

Toby Shorin, Drew Austin, Kara Kittel, Kei Kreutler, Edouard Urcades

Although life may feel a bit boring and repetitive right now, we're in a period of time where the scale is about to tip. The thing is, we're just not sure which way.

Scales

Although it's difficult, especially when we're feeling anxious, or lonely, or uncertain, now is the time to band together with like-minded people and to create the future we want to inhabit. Let's be the change we want to see in the world.


Enjoy this? Sign up for the weekly roundup, become a supporter, or download Thought Shrapnel Vol.1: Personal Productivity!


Quotation-as title from George Eliot. Header image by Martin Widenka.

Friday forebodings

I think it's alright to say that this was a week when my spirits dropped a little. Apologies if that's not what you wanted to hear right now, and if it's reflected in what follows.

For there to be good things there must also be bad. For there to be joy there must also be sorrow. And for there to be hope there must be despair. All of this will pass.


We’re Finding Out How Small Our Lives Really Are

But there’s no reason to put too sunny a spin on what’s happening. Research has shown that anticipation can be a linchpin of well-being and that looking ahead produces more intense emotions than retrospection. In a 2012 New York Times article on why people thirst for new experiences, one psychologist told the paper, “Novelty-seeking is one of the traits that keeps you healthy and happy and fosters personality growth as you age,” and another referred to human beings as a “neophilic species.” Of course, the current blankness in the place of what comes next is supposed to be temporary. Even so, lacking an ability to confidently say “see you later” is going to have its effects. Have you noticed the way in which conversations in this era can quickly become recursive? You talk about the virus. Or you talk about what you did together long ago. The interactions don’t always spark and generate as easily as they once did.

Spencer Kornhaber (The Atlantic)

Part of the problem with all of this is that we don't know how long it's going to last, so we can't really make plans. It's like an extended limbo where you're supposed to just get on with it, whatever 'it' is...


Career Moats in a Recession

If you're going after a career moat now, remember that the best skills to go after are the ones that the market will value after the recession ends. You can’t necessarily predict this — the world is complex and the future is uncertain, but you should certainly keep the general idea in mind.

A simpler version of this is to go after complementary skills to your current role. If you've been working for a bit, it's likely that you'll have a better understanding of your industry than most. So ask yourself: what complementary skills would make you more valuable to the employers in your job market?

Cedric James (Commonplace)

I'm fortunate to have switched from education to edtech at the right time. Elsewhere, James says that "job security is the ability to get your next job, not keep your current one" and that this depends on your network, luck, and having "rare and valuable skills". Indeed.


Everything Is Innovative When You Ignore the Past

This is hard stuff, and acknowledging it comes with a corollary: We, as a society, are not particularly special. Vinsel, the historian at Virginia Tech, cautioned against “digital exceptionalism,” or the idea that everything is different now that the silicon chip has been harnessed for the controlled movement of electrons.

It’s a difficult thing for people to accept, especially those who have spent their lives building those chips or the software they run. “Just on a psychological level,” Vinsel said, “people want to live in an exciting moment. Students want to believe they’re part of a generation that’s going to change the world through digital technology or whatever.”

Aaron Gordon (VICE)

Everyone thinks they live in 'unprecedented' times, especially if they work in tech.


‘We can’t go back to normal’: how will coronavirus change the world?

But disasters and emergencies do not just throw light on the world as it is. They also rip open the fabric of normality. Through the hole that opens up, we glimpse possibilities of other worlds. Some thinkers who study disasters focus more on all that might go wrong. Others are more optimistic, framing crises not just in terms of what is lost but also what might be gained. Every disaster is different, of course, and it’s never just one or the other: loss and gain always coexist. Only in hindsight will the contours of the new world we’re entering become clear.

Peter C Baker (the Guardian)

An interesting read, outlining the optimistic and pessimistic scenarios. The coronavirus pandemic is a crisis, but of course what comes next (CLIMATE CHANGE) is even bigger.


The Terrible Impulse To Rally Around Bad Leaders In A Crisis

This tendency to rally around even incompetent leaders makes one despair for humanity. The correct response in all cases is contempt and an attempt, if possible, at removal of the corrupt and venal people in charge. Certainly no one should be approving of the terrible jobs they [Cuomo, Trump, Johnson] have done.

All three have or will use their increased power to do horrible things. The Coronavirus bailout bill passed by Congress and approved by Trump is a huge bailout of the rich, with crumbs for the poor and middle class. So little, in fact, that there may be widespread hunger soon. Cuomo is pushing forward with his cuts, and I’m sure Johnson will live down to expectations.

Ian Welsh

I'm genuinely shocked that the current UK government's approval ratings are so high. Yes, they're covering 80% of the salary of those laid off, but the TUC was pushing for an even higher figure. It's like we're congratulating neoliberal idiots for destroying our collective ability to respond to this crisis effectively.


As Coronavirus Surveillance Escalates, Personal Privacy Plummets

Yet ratcheting up surveillance to combat the pandemic now could permanently open the doors to more invasive forms of snooping later. It is a lesson Americans learned after the terrorist attacks of Sept. 11, 2001, civil liberties experts say.

Nearly two decades later, law enforcement agencies have access to higher-powered surveillance systems, like fine-grained location tracking and facial recognition — technologies that may be repurposed to further political agendas like anti-immigration policies. Civil liberties experts warn that the public has little recourse to challenge these digital exercises of state power.

Natasha Singer and Choe Sang-Hun (The New York Times)

I've seen a lot of suggestions around smartphone tracking to help with the pandemic response. How, exactly, when it's trivial to spoof your location? It's just more surveillance by the back door.


How to Resolve Any Conflict in Your Team

Have you ever noticed that when you argue with someone smart, if you manage to debunk their initial reasoning, they just shift to a new, logical-sounding reason?

Reasons are like a salamander’s legs — if you cut one off, another grows in its place.

When you’re dealing with a salamander, you need to get to the heart. Forget about reasoning and focus on what’s causing the emotions. According to [non-violent communication], every negative emotion is the result of an unmet, universal need.

Dave Bailey

Great advice here, especially for those who work in organisations (or who have clients) who lack emotional intelligence.


2026 – the year of the face to face pivot

When the current crisis is over in terms of infection, the social and economic impact will be felt for a long time. One such hangover is likely to be the shift to online for so much of work and interaction. As the cartoon goes “these meetings could’ve been emails all along”. So let’s jump forward then a few years when online is the norm.

Martin Weller (The Ed Techie)

Some of the examples given in this post gave me a much-needed chuckle.


Now's the time – 15 epic video games for the socially isolated

However, now that many of us are finding we have time on our hands, it could be the opportunity we need to attempt some of the more chronologically demanding narrative video game masterpieces of the last decade.

Keith Stuart (The Guardian)

Well, yes, but what we probably need even more is multiplayer mode. Red Dead Redemption II is on this list, and it's one of the best games ever made. However, it's tinged with huge sadness for me as it's a game I greatly enjoyed playing with the late, great, Dai Barnes.


Enjoy this? Sign up for the weekly roundup, become a supporter, or download Thought Shrapnel Vol.1: Personal Productivity!


Header image by Alex Fu

We have it in our power to begin the world over again

UBI, GDP, and Libertarian Municipalism

It's sobering to think that, in years to come, historians will probably refer to the 75 years between the end of the Second World War and the start of this period we've just begun with a single name.

Whatever we end up calling it, one thing is for sure: what comes next can't be a continuation of what went before. We need a sharp division of life pre- and post-pandemic.

That's because our societies have been increasingly unequal since 2008, when the global financial crisis meant that the rich consolidated their position while the rest of us paid for the mistakes of bankers and the global elite.

Image via Oxfam

So what can we do about this? What should we be demanding once we're allowed back out of our houses? What should we organise against?

I've been a proponent of Universal Basic Income over the last few years, but I have to say that the closer it comes to being a reality, the more concerns I have about its implementation. Even if it's brought in by a left-leaning government, there's still the danger that it's subsequently used as a weapon against the poor by a new administration.

That's why I was interested in this section from a book I'm reading at the moment. Writing in Future Histories, Lizzie O'Shea suggests that we need to think beyond UBI to include other approaches:

Alongside this, we need to consider how productive, waged work could be more democratically organized to meet the needs of society rather than individual companies. To this end, one commonly suggested alternative to a basic income is a job guarantee. The idea is that the government offers a job to anyone who wants one and is able to work, in exchange for a minimum wage. Jobs could be created around infrastructure projects, for example, or care work. Government spending on this enlarged public sector would act like a kind of Keynesian expenditure, to stimulate the economy and buffer the population against the volatility of the private labor market. Modeling suggests that this would be more cost-effective than a basic income (often critiqued for being too expensive) and avoid many of the inflationary perils that might accompany basic income proposals. It could also be used to jump-start sections of the economy that are politically important, like green energy, carbon reduction and infrastructure. A job guarantee could help us collectively decide what kind of work is most urgent and necessary and to prioritize that through democratically accountable representatives.

Lizzie O'Shea, Future Histories

Of course, as she points out, there are a number of drawbacks to a job guarantee scheme:

  • Reinforcement of the connection between productivity and human value
  • Creation of 'bullshit jobs'
  • Could deny people chance to engage in other pursuits (if poorly implemented)
  • Potential to leave behind people who cannot work (disability / other health concerns)
  • Seems didactic and disciplinary

Nevertheless, O'Shea believes that a combination of a job guarantee, UBI, and government-provided services is the way forward:

Ultimately, we need a combination of these programs. We need the liberty offered by a basic income, the sustainability promised by the organization of a job guarantee, and the protection of dignity offered by centrally planned essential services. It is like a New Deal for the age of automation, a ground rent for the digital revolution, in which the benefits of accelerated productive capacity are shared among everyone. From each according to his ability, to each according to their need - a twenty-first-century vision of socialism. "We have it in our power to begin the world over again," wrote Thomas Paine in an appendix to Common Sense, just before one of the most revolutionary periods in human history. We have a similar opportunity today.

Lizzie O'Shea, Future Histories

While I don't disagree that we will continue to need "the protection of dignity offered by centrally planned essential services," I'm not so sure that giving the state so much power over our lives is a good thing. I think this approach papers over the cracks of neoliberalism, giving billionaires and capitalists a get-out-of-jail-free card.

Instead, I'd like to see a post-pandemic breakup of mega corporations. While a de jure limit on how much one individual or one organisation can be worth is likely to be unworkable, there are ways we can make de facto limits on this a reality.

People respond to incentives, including how easy or hard it is to do something. I know from experience how easy it is to set up a straightforward limited company in the UK and how difficult it is to set up a co-operative. To get to where we need to be, we need to ensure collective decision-making is the norm within organisations owned by workers. And then these worker-owned organisations need to co-ordinate for the good of everyone.

I'm a huge believer in decentralisation, not just technologically but politically and socially; we don't need governments, billionaires, or celebrities telling us what to do with our lives. We need to think wider and deeper. My current thinking aligns with this section on libertarian municipalism from the Wikipedia page on the political philosopher Murray Bookchin:

Libertarian Municipalism constitutes the politics of social ecology, a revolutionary effort in which freedom is given institutional form in public assemblies that become decision-making bodies.

Wikipedia

...or, in other words:

The overriding problem is to change the structure of society so that people gain power. The best arena to do that is the municipality—the city, town, and village—where we have an opportunity to create a face-to-face democracy.

Wikipedia

Some people think that, in these days of super-fast connections to anyone on the planet, nation states are dead, and that we should be building communities on the blockchain. I have yet to see a proposal of how this would be workable in practice; everything I've seen so far extrapolates naïvely from what's technically possible to what should be socially desirable.

Yes, we can and should have solidarity with people around the world with whom we work and socialise. But this does not negate the importance of decision-making at a local level. Gaming clans don't yet do bin collections, and colleagues in a different country can't fix the corruption riddling your local government.

Ultimately, then, we're going to need a whole new politics and social contract after the pandemic. I sincerely hope we manage to grasp the nettle and do something radically different. I'm not sure how we'll all survive if the rich, once again, come out of all this even richer than before.


BONUS: check out this 1978 speech from Murray Bookchin where he calls for utopia, not futurism.




Quotation-as-title from Thomas Paine. Header image by Stas Knop.

Friday flickerings

I've tried to include some links to other things here, but just like all roads lead to Rome, all links eventually point to the pandemic.

I hope you and people that you care about are well. Stay safe, stay indoors, and let me know which of the following resonate with you!


Supermensch

Our stories about where inventiveness comes from, and how the future will be made, overwhelmingly focus on the power of the individual. Such stories appeal to the desire for human perfection (and redemption?) recast in technological language, and they were integral to the way that late-19th-century inventor-entrepreneurs, such as Tesla or Thomas Edison, presented themselves to their publics. They’re still very much part of the narrative of technological entrepreneurism now. Just as Tesla wanted to be seen as a kind of superhero of invention, unbound by conventional restraints, so too do his contemporary admirers at the cutting edge of the tech world. Superheroes resonate within that culture precisely because they embody in themselves the perception of technology as something that belongs to powerful and iconoclastic individuals. They epitomise the idea that technological culture is driven by outsiders. The character of Iron Man makes this very clear: after all, he really is a tech entrepreneur, his superpowers the product of the enhanced body armour he wears.

Iwan Rhys Morus (Aeon)

A really interesting read about the link between individualism, superheroes, technology, and innovation.


The Second Golden Age of Blogging

Blogging was then diffused into social media, but now social media is so tribal and algo-regulated that anybody with a real message today needs their own property. At the same time, professional institutions are increasingly suffocated by older, rent-seeking incumbents and politically-correct upstarts using moralism as a career strategy. In such a context, blogging — if it is intelligent, courageous, and consistent — is currently one of the most reliable methods for intellectually sophisticated individuals to accrue social and cultural capital outside of institutions. (Youtube for the videographic, Instagram for the photographic, podcasting for the loquacious, but writing and therefore blogging for the most intellectually sophisticated.)

Justin Murphy (Other Life)

I've been blogging since around 2004, so for sixteen years, and through all of my career to date. It's the best and most enjoyable thing about 'work'.


NASA Fixes Mars Lander By Telling It to Hit Itself With a Shovel

NASA expected its probe, dubbed “the mole,” to dig its way through sand-like terrain. But because the Martian soil clumped together, the whole apparatus got stuck in place.

Programming InSight’s robotic arm to land down on the mole was a risky, last-resort maneuver, PopSci reports, because it risked damaging fragile power and communication lines that attached nearby. Thankfully, engineers spent a few months practicing in simulations before they made a real attempt.

Dan Robitzski (Futurism)

The idea of NASA engineers sending a signal to a distant probe to get it to hit itself, in the midst of a crisis on earth, made me chuckle this week.


Act as if You’re Really There

Don’t turn your office into a generic TV backdrop. Video is boring enough. The more you remove from the frame, the less visual data you are providing about who you are, where you live, how you work, and what you care about. If you were watching a remote interview with, say, Bong Joon-ho (the South Korean director of Parasite) would you want him sitting on a blank set with a ficus plant? Of course not. You would want to see him in his real office or studio. What are the posters on his wall? The books on his shelf? Who are his influences?

Douglas Rushkoff (OneZero)

Useful advice in this post from Douglas Rushkoff. I appreciate his reflection that, "every pixel is a chance to share information about your process and proclivities."


People Are Looping Videos to Fake Paying Attention in Zoom Meetings

On Twitter, people are finding ways to use the Zoom Rooms custom background feature to slap an image of themselves in their frames. You can record a short, looping video as your background, or take a photo of yourself looking particularly attentive, depending on the level of believability you're going for. Zoom says it isn't using any kind of video or audio analysis to track attention, so this is mostly for your human coworkers and boss' sake. With one of these images on your background, you're free to leave your seat and go make a sandwich while your boss thinks you're still there paying attention:

Samantha Cole (Vice)

As an amusing counterpoint to the above article, I find it funny that people are using video backgrounds in this way!


A Guide to Hosting Virtual Events with Zoom

There are lots of virtual event tools out there, like Google Hangouts, YouTube Live, Vimeo Live. For this guide I’ll delve into how to use Zoom specifically. However, a lot of the best practices explored here are broadly applicable to other tools. My goal is that reading this document will give you all the tools you need to be able to set up a meeting and host it on Zoom (or other platforms) in fun and interactive ways.

Alexa Kutler (Google Docs)

This is an incredible 28-page document that explains how to set up Zoom meetings for success. Highly recommended!


The rise of the bio-surveillance state

Elements of Asia’s bio-surveillance revolution may not be as far off as citizens of Western democracies assume. On 24 March an emergency bill, which would relax limits on urgent surveillance warrants, went before the House of Lords. In any case, Britain’s existing Investigatory Powers Act already allows the state to seize mobile data if national security justifies it. In another sign that a new era in data rights is dawning, the EU is reviewing its recent white paper on AI regulation and delaying a review of online privacy rules. Researchers in both Britain (Oxford) and the US (MIT) are developing virus-tracking apps inviting citizens to provide movement data voluntarily. How desperate would the search for “needles in haystacks” have to get for governments to make such submissions compulsory? Israel’s draconian new regulations – which allegedly include tapping phone cameras and microphones – show how far down this road even broadly Western democracies might go to save lives and economies.

Jeremy Cliffe (New Statesman)

We need urgent and immediate action around the current crisis. But we also need safeguards and failsafes so that we don't end up with post-pandemic authoritarian regimes.


The economy v our lives? It's a false choice – and a deeply stupid one

Soon enough, as hospitals around the world overflow with coronavirus patients, exhausting doctors, nurses, orderlies, custodians, medical supplies, ventilators and hospital cash accounts, doctors will have to make moral choices about who lives or dies. We should not supersede their judgment based on a false choice. Economic depression will come, regardless of how many we let die. The question is how long and devastating it will be.

Siva Vaidhyanathan (The Guardian)

Not exactly a fun read, but the truth is the world's economy is shafted no matter which way we look at it. And as I tweeted the other day, there's no real thing, objectively speaking, called 'the economy' which is separate from human relationships.


How the Pandemic Will End

Pandemics can also catalyze social change. People, businesses, and institutions have been remarkably quick to adopt or call for practices that they might once have dragged their heels on, including working from home, conference-calling to accommodate people with disabilities, proper sick leave, and flexible child-care arrangements. “This is the first time in my lifetime that I’ve heard someone say, ‘Oh, if you’re sick, stay home,’” says Adia Benton, an anthropologist at Northwestern University. Perhaps the nation will learn that preparedness isn’t just about masks, vaccines, and tests, but also about fair labor policies and a stable and equal health-care system. Perhaps it will appreciate that health-care workers and public-health specialists compose America’s social immune system, and that this system has been suppressed.

Ed Yong (The Atlantic)

Much of this is a bit depressing, but I've picked up on the more positive bit towards the end. See also the article I wrote earlier this week: People seem not to see that their opinion of the world is also a confession of character




Header image by Sincerely Media.

People seem not to see that their opinion of the world is also a confession of character

Actions, reactions, and what comes next

We are, I would suggest, in a period of collective shock due to the pandemic. Of course, some people are better at dealing with these kinds of things than others. I'm not medically trained, but I'm pretty sure some of this comes down to genetics; it's probably something to do with the production of cortisol.

It might be a little simplistic to separate people into those who are good in a crisis and those who aren't. It's got to be more complex than that. What if some people, despite their genetic predisposition, have performed some deliberate practice in terms of how they react to events and other things around them?

I often say to my kids that it's not your actions that mark you out as a person, but your reactions. After all, anyone can put on a 'mask' and affect an air of nonchalance and sophistication. But that mask can slip in a crisis. To mix metaphors, people lose control when they reach the end of their tether, and are at their most emotionally vulnerable and unguarded when things go wrong. This is when we see their true colours.

A few years ago, when I joined Moodle, I flew to Australia and we did some management bonding stuff and exercises. One of them was about the way that you operate in normal circumstances, and the way that you operate under pressure. Like most people, I tended to get more authoritarian in a crisis.

What we're seeing in this crisis, I think, are people's true colours. The things they're talking about the most and wanting to protect are the equivalent of the item they'd pull from a burning building. What do they want to protect from the coronavirus? Is it the economy? Is it their family? Is it freedom of speech?


Last week, I asked Thought Shrapnel supporters what I should write about. It was suggested that I focus on something beyond the "reaction and hyperaction" that's going on, and engage in "a little futurism and hope". Now that it's no longer easier to imagine the end of the world as the end of capitalism, how do we prepare for what comes next?

It's an interesting suggestion for a thought experiment. Before we go any further, though, I want to preface this by saying these are the ramblings of an incoherent fool. Don't make any investment decisions, buy any new clothes, or sever any relationships based on what I've got to say. After all, at this point, I'm mostly writing for rhetorical effect.


The first and obvious thing that I think will happen as a result of the pandemic is that people will get sick and some will die. Pretty much everyone on earth will either lose someone close to them or know someone who has. Death, as it has done for much of human history, will stalk us, and be something we are forced to both confront and talk about.

This may not seem like a very cheerful and hopeful place to start, but, actually, not being afraid to die seems to be the first step in living a fulfilling life. As I've said before, it is the child within us that trembles before death. Coming to terms with the fact that you and the people you love are going to die at some point is just accepting the obvious.


If we don't act like we're going to live forever, if we confront our mortal condition, then it forces us to make some choices, both individually and as a society. How do we care for people who are sick and dying? How should we support those who are out of work? What kind of education do we want for our kids?

I foresee a lot of basic questions being re-asked and many assumptions re-evaluated in the light of the pandemic. Individually, in communities, and as societies, we'll look back and wonder why it was that companies making billions of dollars when everything was fine were all of a sudden unable to meet their financial obligations when things weren't going so well. We'll realise that, at root, the neoliberalist form of capitalism we've been drinking like kool-aid actually takes from the many and gives to the few.

Before the pandemic, we had dead metaphors for both socialism and "pulling together in times of adversity". Socialism has been unfairly caricatured as, and equated with, the totalitarian communist experiment in Russia. Meanwhile, neoliberals have done a great job at equating adversity with austerity, invoking memories of life during WWII. Keep Calm and Carry On.

This is why, in the aftermath of the 2008 financial crash, despite the giant strides and inroads into our collective consciousness made by the Occupy movement, it ultimately failed. When it came down to brass tacks, we were frightened that destroying our current version of capitalism would mean we'd be left with totalitarian communism: queuing for food, spying on your neighbours, and suchlike.

So instead we invoked the only "pulling together in times of adversity" meme we knew: austerity. Unfortunately, that played straight into the hands of those who were happy to hollow out civic society for financial gain.

Post-pandemic, as we're rebuilding society, I think that not only will there be fewer old people (grim, but true) but the overall shock will move the Overton Window further to the left than it has been previously. Those who remain are likely to be much more receptive to the kind of socialism that would make things like Universal Basic Income and radically decarbonising the planet into a reality.


Making predictions about politics is a lot easier than making predictions about technology. That's for a number of reasons, including how quickly the latter moves compared to the former, and also because of the compound effect that different technologies can have on society.

For example, look at the huge changes in the last decade around smartphones now being something that people spend several hours using each day. A decade ago we were concerned about people's access to any form of internet-enabled device. Now, we just assume that everyone's got one which they can use to connect during the pandemic.

What concerns me is that the past decade has seen not only the hollowing-out of civic society in western democracies, but also our capitulation to venture capital-backed apps that make our lives easier. The reason? They're all centralised.

I'm certainly not denying that some of this is going to make our life much easier short-term. Being on lockdown and still being able to have Amazon deliver almost anything to me is incredible. As is streaming all of the things via Netflix, etc. But, ultimately, caring doesn't scale, and scaling doesn't care.

Right now, we're relying on centralised technologies. Everywhere I look, people are using apps, tools, and platforms that could go down at any time. Remember the Twitter fail whale?

The Twitter 'fail whale'

What happens when that scenario happens with Zoom? Or Microsoft Teams? Or Slack, or any kind of service that relies on the one organisation having their shit together for an extended period of time during a pandemic?

I think we're going to see outages or other degradations in service. I'm hoping that this will encourage people to experiment with other, decentralised platforms, rather than leap from the frying pan of one failed centralised service into the fire of another.


In terms of education, I don't think it's that difficult to predict what comes next. While I could be spectacularly wrong, the longer kids are kept at home and away from school, the more mainstream online teaching and learning has to become.

Then, when it's time to go back to school, some kids won't. They and their parents will realise that they don't need to, or that they are happier, or have learned more staying at home. Not all, by any means, but a significant minority. And because everyone has been in the same boat, parents will have peer support in doing so.

The longer the pandemic lockdown goes on, the more educational institutions will have to think about the logistics and feasibility of online testing. I'd like to think that competency-based learning and stackable digital credentials like Open Badges will become the norm.

Further out, as young people affected by the pandemic lockdown enter the job market, I'd hope that they would reject the traditional CV or resume as something that represents their experiences. Instead, although it's more time-consuming to look at, I'd hope for portfolio-based approaches (with verified digital credentials) to become standard.


Education isn't just about, or even mainly about, getting a job. So what about the impact of the pandemic on learners? On teachers? Well, if I'm being optimistic and hopeful, I'd say that it shows that things can be done differently at scale.

NASA Earth Observatory images showing emissions dramatically reduced over China during the coronavirus outbreak (via CBS)

In the same way that climate change-causing emissions dropped dramatically in China and other countries during the enforced coronavirus lockdown, so we can get rid of the things we know are harmful in education.

High-stakes testing? We don't need it. Kids being taught in classes of 30+ by a low-paid teacher? Get over it. Segregation between rich and poor through private education? Reject it.


All of this depends on how we respond to the 'shock and awe' of both the pandemic and its response. We're living during a crisis when it's almost certainly necessary to bring in the kind of authoritarian measures we'd reject at any other time. While we need to move quickly, we still need to subject legislation and new social norms to some kind of scrutiny.

This period in history provides us with a huge opportunity. When I was a History teacher, one of my favourite things to teach kids was about revolutions; about times when people took things into their own hands. There's the obvious examples, for sure, like 1789 and the French Revolution.

But perhaps my absolute favourite was for them to discover what happened after the Black Death ravaged Europe in particular in the 14th century. Unable to find enough workers to work their land, lords had to pay peasants several times what they could have previously expected. In fact, it led to the end of the entire feudal system.

We have the power to achieve something similar here. Except instead of serfdom, the thing we can escape from is neoliberal capitalism, the idea that the poor should suffer for the enrichment of the elite. We can and should structure our society so that never happens again.

In other words, never waste a crisis. What are you doing to help the revolution? Remember, when it comes down to it, power is always taken, never freely given.


BONUS: after writing this, I listened to a recent a16z podcast on Remote Work and Our New Reality. Worth a listen!




Quotation-as-title by Ralph Waldo Emerson. Header image by Ana Flávia.

Friday fumings

My bet is that you've spent most of this week reading news about the global pandemic. Me too. That's why I decided to ensure it's not mentioned at all in this week's link roundup!

Let me know what resonates with you... 😷


Finding comfort in the chaos: How Cory Doctorow learned to write from literally anywhere

My writing epiphany — which arrived decades into my writing career — was that even though there were days when the writing felt unbearably awful, and some when it felt like I was mainlining some kind of powdered genius and sweating it out through my fingertips, there was no relation between the way I felt about the words I was writing and their objective quality, assessed in the cold light of day at a safe distance from the day I wrote them. The biggest predictor of how I felt about my writing was how I felt about me. If I was stressed, underslept, insecure, sad, hungry or hungover, my writing felt terrible. If I was brimming over with joy, the writing felt brilliant.

Cory Doctorow (CBC)

Such great advice in here from the prolific Cory Doctorow. Not only is he a great writer, he's a great speaker, too. I think both come from practice and clarity of thought.


Slower News

Trends, micro-trends & edge cases.

This is a site that specialises in important and interesting news that is updated regularly, but not on an hour-by-hour (or even daily) basis. A wonderful antidote to staring at your social media feed for updates!


SCARF: The 5 key ingredients for psychological safety in your team

There’s actually a mountain of compelling evidence that the single most important ingredient for healthy, high-performing teams is simple: it’s trust. When Google famously crunched the data on hundreds of high-performing teams, they were surprised to find that one variable mattered more than any other: “emotional safety.” Also known as: “psychological security.” Also known as: trust.

Matt Thompson

I used to work with Matt at Mozilla, and he's a pretty great person to work alongside. He's got a book coming out this year, and Laura (another former Mozilla colleague, but also a current co-op colleague!) drew my attention to this.


I Illustrated National Parks In America Based On Their Worst Review And I Hope They Will Make You Laugh (16 Pics)

I'm an illustrator and I have always had a personal goal to draw all 62 US National Parks, but I wanted to find a unique twist for the project. When I found that there are one-star reviews for every single park, the idea for Subpar Parks was born. For each park, I hand-letter a line from the one-star reviews alongside my illustration of each park as my way of putting a fun and beautiful twist on the negativity.

Amber Share (Bored Panda)

I love this, especially as the illustrations are so beautiful and the comments so banal.


What Does a Screen Do?

We know, for instance, that smartphone use is associated with depression in teens. Smartphone use certainly could be the culprit, but it’s also possible the story is more complicated; perhaps the causal relationship works the other way around, and depression drives teenagers to spend more time on their devices. Or, perhaps other details about their life—say, their family background or level of physical activity—affect both their mental health and their screen time. In short: Human behavior is messy, and measuring that behavior is even messier.

Jane C. Hu (Slate)

This, via Ian O'Byrne, is a useful read for anyone who deals with kids, especially teenagers.


13 reads to save for later: An open organization roundup

For months, writers have been showering us with multiple, ongoing series of articles, all focused on different dimensions of open organizational theory and practice. That's led to a real embarrassment of riches—so many great pieces, so little time to catch them all.

So let's take a moment to reflect. If you missed one (or several), now's your chance to catch up.

Bryan Behrenshausen (Opensource.com)

I've already shared some of the articles in this roundup, but I encourage you to check out the rest, and subscribe to opensource.com. It's a great source of information and guidance.


It Doesn’t Matter If Anyone Exists or Not

Capitalism has always transformed people into latent resources, whether as labor to exploit for making products or as consumers to devour those products. But now, online services make ordinary people enact both roles: Twitter or Instagram followers for conversion into scrap income for an influencer side hustle; Facebook likes transformed into News Feed-delivery refinements; Tinder swipes that avoid the nuisance of the casual encounters that previously fueled urban delight. Every profile pic becomes a passerby—no need for an encounter, even.

Ian Bogost (The Atlantic)

An amazing piece of writing, in which Ian Bogost not only surveys our previous experiences with 'strangers', but applies that history to the internet. As he points out, there is a huge convenience factor in not knowing who made your sandwich. I've pointed out before that capitalism is all about scale, and at the end of the day, caring doesn't scale, and scaling doesn't care.


You don't want quality time, you want garbage time

We desire quality moments and to make quality memories. It's tempting to think that we can create quality time just by designating it so, such as via a vacation. That generally ends up backfiring due to our raised expectations being let down by reality. If we expect that our vacation is going to be perfect, any single mistake ruins the experience.

In contrast, you are likely to get a positive surprise when you have low expectations, which is likely the case during a "normal day". It’s hard to match perfection, and easy to beat normal. Because of this, it's more likely quality moments come out of chance.

If you can't engineer quality time, and it's more a matter of random events, it follows that you want to increase how often such events happen. You can't increase the probability, but you can increase the duration for such events to occur. Put another way, you want to increase quantity of time, and not engineer quality time.

Leon Lin (Avoid boring people)

There's a lot of other interesting-but-irrelevant things in this newsletter, so scroll to the bottom for the juicy bit. I've quoted the most pertinent point, which I definitely agree with. There's wisdom in Gramsci's quotation about having "pessimism of the intellect, optimism of the will".


The Prodigal Techbro

The prodigal tech bro doesn’t want structural change. He is reassurance, not revolution. He’s invested in the status quo, if we can only restore the founders’ purity of intent. Sure, we got some things wrong, he says, but that’s because we were over-optimistic / moved too fast / have a growth mindset. Just put the engineers back in charge / refocus on the original mission / get marketing out of the c-suite. Government “needs to step up”, but just enough to level the playing field / tweak the incentives. Because the prodigal techbro is a moderate, centrist, regular guy. Dammit, he’s a Democrat. Those others who said years ago what he’s telling you right now? They’re troublemakers, disgruntled outsiders obsessed with scandal and grievance. He gets why you ignored them. Hey, he did, too. He knows you want to fix this stuff. But it’s complicated. It needs nuance. He knows you’ll listen to him. Dude, he’s just like you…

Maria Farrell (The Conversationalist)

Now that we're experiencing something of a 'techlash' it's unsurprising that those who created surveillance capitalism have had a 'road to Damascus' experience. That doesn't mean, as Maria Farrell points out, that we should all of a sudden consider them to be moral authorities.


Enjoy this? Sign up for the weekly roundup, become a supporter, or download Thought Shrapnel Vol.1: Personal Productivity!

We are too busy mopping the floor to turn off the faucet

Pandemics, remote work, and global phase shifts


Last week, I tweeted this:

I delete my tweets automatically every 30 days, hence the screenshot...

I get the feeling that, between film and TV shows on Netflix, Amazon deliveries, and social interaction on Twitter and Mastodon, beyond close friends and family, no-one would even realise if I'd been quarantined.


Writing in The Atlantic, Ian Bogost points out that Every Place Is the Same Now, because you go to every place with your personal screen, a digital portal to the wider world.

Anywhere has become as good as anywhere else. The office is a suitable place for tapping out emails, but so is the bed, or the toilet. You can watch television in the den—but also in the car, or at the coffee shop, turning those spaces into impromptu theaters. Grocery shopping can be done via an app while waiting for the kids’ recital to start. Habits like these compress time, but they also transform space. Nowhere feels especially remarkable, and every place adopts the pleasures and burdens of every other. It’s possible to do so much from home, so why leave at all?

Ian Bogost (The Atlantic)

If you're a knowledge worker, someone who deals with ideas and virtual objects rather than things in 'meatspace', then there is nothing tying you to a particular geographical place. This may be liberating, but it's also quite... weird.

It’s easy but disorienting, and it makes the home into a very strange space. Until the 20th century, one had to leave the house for almost anything: to work, to eat or shop, to entertain yourself, to see other people. For decades, a family might have a single radio, then a few radios and a single television set. The possibilities available outside the home were far greater than those within its walls. But now, it’s not merely possible to do almost anything from home—it’s also the easiest option. Our forebears’ problem has been inverted: Now home is a prison of convenience that we need special help to escape.

Ian Bogost (The Atlantic)

I've worked from home for the last eight years, and now can't imagine going back to working any other way. Granted, I get to travel pretty much every month, but that 95% being-at-home statistic still includes my multi-day international trips.


I haven't watched it recently, but in 2009 a film called Surrogates starring Bruce Willis foreshadowed the kind of world we're creating. Here's the synopsis via IMDB:

People are living their lives remotely from the safety of their own homes via robotic surrogates — sexy, physically perfect mechanical representations of themselves. It's an ideal world where crime, pain, fear and consequences don't exist. When the first murder in years jolts this utopia, FBI agent Greer discovers a vast conspiracy behind the surrogate phenomenon and must abandon his own surrogate, risking his life to unravel the mystery.

IMDB

If we replace the word 'robotic' with 'virtual' in this plot summary, then it's a close approximation to the world in which some of us now live. Facetuned Instagram selfies project a perfect life. We construct our own narratives and then believe the story we have concocted. Everything is amazing but no-one's happy.


Even Zoom, the videoconferencing software I use most days for work, has an option to smooth out wrinkles, change your background, and make everything look a bit more sparkly. Our offline lives can be gloriously mundane, but online, thanks to various digital effects, we can make them look glorious. And why wouldn't we?

I think we'll see people and businesses optimising for how they look and sound online, including recruitment. The ability to communicate effectively at a distance with people who you may never meet in person is a skill that's going to be in high demand, if it isn't already.


Remote working may be a trend, but one which is stubbornly resisted by some bosses who are convinced they have to keep a close eye on employees to get any work out of them.

However, when those bosses are forced to implement remote working policies to keep their businesses afloat, and nothing bad happens as a result, this attitude can, and probably will, change. Remote working, when done properly, is not only more cost-effective for businesses, but often leads to higher productivity and self-reported worker happiness.

Being 'good in the room' is fine, and I'm sure it will always be highly prized, but I also see confident, open working practices as something that's rising in perceived value. Chairing successful online meetings is at least as important as chairing ones offline, for example. We need to think of ways of being able to recognise these remote working skills, as it's not something in which you can receive a diploma.


For workers, of course, there are so many benefits of working from home that I'm not even sure where to start. Your health, relationships, and happiness are just three things that are likely to dramatically improve when you start working remotely.

For example, let's just take the commute. This dominates the lives of non-remote workers, usually taking an hour or more out of their day — every day. Commuting is tiring and inconvenient, but people are currently willing to put up with long commutes to afford a decently-sized house, or to live in a nicer area.

So, let's imagine that because of the current pandemic (which some are calling the world's biggest remote-working experiment) businesses decide that having their workers based at home has multi-faceted benefits. What happens next?

Well, if a large percentage (say we got up to ~50%) of the working population started working remotely over the next few months and years, this would have a knock-on effect. We'd see changes in:

  • Schools
  • Volunteering
  • Offices
  • House prices
  • Community cohesion
  • High street
  • Home delivery

...to name but a few. I think it would be a huge net benefit for society, and hopefully allow for much greater civic engagement and democratic participation.


I'll conclude with a quotation from Nafeez Ahmed's excellent (long!) post on what he's calling a global phase shift. Medium says it's a 30-minute read, but I reckon it's about half that.

Ahmed points out in stark detail the crisis, potential future scenarios, and the opportunity we've got. I particularly appreciate his focus on the complete futility of what he calls "a raw, ‘fend for yourself’ approach". We must work together to solve the world's problems.

The coronavirus outbreak is, ultimately, a lesson in not just the inherent systemic fragilities in industrial civilization, but also the limits of its underlying paradigm. This is a paradigm premised on a specific theory of human nature, the neoclassical view of Homo-Economicus, human beings as dislocated units which compete with each other to maximise their material self-gratification through endless consumption and production. That paradigm and its values have brought us so far in our journey as a species, but they have long outlasted their usefulness and now threaten to undermine our societies, and even our survival as a species.

Getting through coronavirus will be an exercise not just in building societal resilience, but relearning the values of cooperation, compassion, generosity and kindness, and building systems which institutionalize these values. It is high time to recognize that such ethical values are not simply human constructs, products of socialization. They are cognitive categories which reflect patterns of behaviour in individuals and organizations that have an evolutionary, adaptive function. In the global phase shift, systems which fail to incorporate these values into their structures will eventually die.

Nafeez Ahmed

Just as crises can be manufactured by totalitarian regimes to seize power and control populations, perhaps natural crises can be used to make us collectively realise we need to pull together?




Header image by pan xiaozhen. Anonymous quotation-as-title taken from Scott Klososky's The Velocity Manifesto

Friday filchings

I'm having to write this ahead of time due to travel commitments. Still, there's the usual mixed bag of content in here, everything from digital credentials through to survival, with a bit of panpsychism thrown in for good measure.

Did any of these resonate with you? Let me know!


Competency Badges: the tail wagging the dog?

Recognition is from a certain point of view hyperlocal, and it is this hyperlocality that gives it its global value – not the other way around. The space of recognition is the community in which the competency is developed and activated. The recognition of a practitioner in a community is not reduced to those generally considered to belong to a “community of practice”, but to the intersection of multiple communities and practices, starting with the clients of these practices: the community of practice of chefs does not exist independently of the communities of their suppliers and clients. There is also a very strong link between individual recognition and that of the community to which the person is identified: shady notaries and politicians can bring discredit on an entire community.

Serge Ravet

As this roundup goes live I'll be at Open Belgium, and I'm looking forward to catching up with Serge while I'm there! My take on the points that he's making in this (long) post is actually what I'm talking about at the event: open initiatives need open organisations.


Universities do not exist ‘to produce students who are useful’, President says

Mr Higgins, who was opening a celebration of Trinity College Dublin’s College Historical Debating Society, said “universities are not there merely to produce students who are useful”.

“They are there to produce citizens who are respectful of the rights of others to participate and also to be able to participate fully, drawing on a wide range of scholarship,” he said on Monday night.

The President said there is a growing cohort of people who are alienated and “who feel they have lost their attachment to society and decision making”.

Jack Horgan-Jones (The Irish Times)

As a Philosophy graduate, I wholeheartedly agree with this, and also with his assessment of how people are obsessed with 'markets'.


Perennial philosophy

Not everyone will accept this sort of inclusivism. Some will insist on a stark choice between Jesus or hell, the Quran or hell. In some ways, overcertain exclusivism is a much better marketing strategy than sympathetic inclusivism. But if just some of the world’s population opened their minds to the wisdom of other religions, without having to leave their own faith, the world would be a better, more peaceful place. Like Aldous Huxley, I still believe in the possibility of growing spiritual convergence between different religions and philosophies, even if right now the tide seems to be going the other way.

Jules Evans (Aeon)

This is an interesting article about the philosophy of Aldous Huxley, whose books have always fascinated me. For some reason, I hadn't twigged that he was related to Thomas Henry Huxley (aka "Darwin's bulldog").


Photo by Scott Webb

What the Death of iTunes Says About Our Digital Habits

So what really failed, maybe, wasn’t iTunes at all—it was the implicit promise of Gmail-style computing. The explosion of cloud storage and the invention of smartphones both arrived at roughly the same time, and they both subverted the idea that we should organize our computer. What they offered in its place was a vision of ease and readiness. What the idealized iPhone user and the idealized Gmail user shared was a perfect executive-functioning system: Every time they picked up their phone or opened their web browser, they knew exactly what they wanted to do, got it done with a calm single-mindedness, and then closed their device. This dream illuminated Inbox Zero and Kinfolk and minimalist writing apps. It didn’t work. What we got instead was Inbox Infinity and the algorithmic timeline. Each of us became a wanderer in a sea of content. Each of us adopted the tacit—but still shameful—assumption that we are just treading water, that the clock is always running, and that the work will never end.

Robinson Meyer (The Atlantic)

This is a curiously written (and well-written) piece, in the form of an ordered list, that takes you through the changes since iTunes launched. It's hard to disagree with the author's arguments.


Imagine a world without YouTube

But what if YouTube had failed? Would we have missed out on decades of cultural phenomena and innovative ideas? Would we have avoided a wave of dystopian propaganda and misinformation? Or would the internet have simply spiraled into new — yet strangely familiar — shapes, with their own joys and disasters?

Adi Robertson (The Verge)

I love this approach of imagining how the world would have been different had YouTube not been the massive success it's been over the last 15 years. Food for thought.


Big Tech Is Testing You

It’s tempting to look for laws of people the way we look for the laws of gravity. But science is hard, people are complex, and generalizing can be problematic. Although experiments might be the ultimate truthtellers, they can also lead us astray in surprising ways.

Hannah Fry (The New Yorker)

A balanced look at the way that companies, especially those we classify as 'Big Tech', tend to experiment for the purposes of engagement and, ultimately, profit. Definitely worth a read.


Photo by David Buchi

Trust people, not companies

The trend to tap into is the changing nature of trust. One of the biggest social trends of our time is the loss of faith in institutions and previously trusted authorities. People no longer trust the Government to tell them the truth. Banks are less trusted than ever since the Financial Crisis. The mainstream media can no longer be trusted by many. Fake news. The anti-vac movement. At the same time, we have a generation of people who are looking to their peers for information.

Lawrence Lundy (Outlier Ventures)

This post is making the case for blockchain-based technologies. But the wider point is a better one, that we should trust people rather than companies.


The Forest Spirits of Today Are Computers

Any sufficiently advanced technology is indistinguishable from nature. Agriculture de-wilded the meadows and the forests, so that even a seemingly pristine landscape can be a heavily processed environment. Manufactured products have become thoroughly mixed in with natural structures. Now, our machines are becoming so lifelike we can’t tell the difference. Each stage of technological development adds layers of abstraction between us and the physical world. Few people experience nature red in tooth and claw, or would want to. So, although the world of basic physics may always remain mindless, we do not live in that world. We live in the world of those abstractions.

George Musser (Nautilus)

This article, about artificial 'panpsychism', challenges the reader's initial assumptions (well, mine at least) and really makes you think.


The man who refused to freeze to death

It would appear that our brains are much better at coping in the cold than dealing with being too hot. This is because our bodies’ survival strategies centre around keeping our vital organs running at the expense of less essential body parts. The most essential of all, of course, is our brain. By the time that Shatayeva and her fellow climbers were experiencing cognitive issues, they were probably already experiencing other organ failures elsewhere in their bodies.

William Park (BBC Future)

Not just one story in this article, but several with fascinating links and information.




Header image by Tim Mossholder.

What the crowd requires is mediocrity of the highest order

What expectations do you have for your life? What were they aged seven? How about aged 17? Or 27? Did those expectations change? If so, why did they change?

Did you become a different person? Perhaps you met someone? Or did something unexpected happen?

Could it be that your expectations increased as time went on? Did you realise that you'd be able to surpass what other people thought you could achieve in your life? Perhaps you met people who had higher expectations of themselves and others?

If your expectations decreased as you got older, how do you feel about that? Did you tell yourself that you're being 'realistic'? Or did a comment or action from someone, or group of people, cause you to reassess things?

Maybe you came to a realisation that what you thought was important, actually wasn't really? Or, it could have been that what you thought was unimportant, actually was vitally important to you?

Was it your health? Were you sickly when younger and grew stronger over time? Or was it the other way around? Is your health an excuse? A crutch? Or have you flourished despite your constraints?

Have you ever reflected on your expectations of yourself? What about those of others? Are you harder on yourself than on other people? Or are you harder on them than you are on yourself?

What would your seven-year-old self say? How about the 17-year-old version of you? Would your 27-year-old self be happy about your current expectation levels?


Quotation-as-title by Auguste Préault. Image by Davide Ragusa. If you liked this, you'll love The Interrogative Mood by Padgett Powell.

Friday fluidity

I wasn't sure whether to share links about the Coronavirus this week, but obviously, like everyone else, I've been reading about it.

Next week, my wife and I are heading to Belgium as I'm speaking at an event, and then we're spending the weekend in Bruges. I think we'll be OK. But even if we do contract the virus, the chances of us dying, or even being seriously ill, are vanishingly small. It's all very well being pragmatic, but you can't live your life in fear.

Anyway, if you've heard enough about potential global pandemics, feel free to skip straight on to the second and third sections, where I share some really interesting links about organisations, productivity, security, and more!


How I track the coronavirus

I’ve been tracking it carefully for weeks, and have built up an online search strategy. I’d like to share a description of it here, partly in case it’s useful for readers, and also to request additions in case it’s missing anything.

Bryan Alexander

What I like about this post by Bryan is that he's sharing both his methods and go-to resources, without simultaneously sharing his conclusions. That's the mark of an open mind, and that's why I support him on Patreon.


Coronavirus and World After Capital

The danger we are now finding ourselves in can be directly traced to our reliance on the market mechanism for allocating attention. A global pandemic is an example of the kind of tail risk for which prices cannot exist. This is a key theme of my book World After Capital and I have been using pandemics as an alternative example to the climate crisis (another, while we are at it, are asteroid strikes).

Albert Wenger (Continuations)

I really must sit down and read World After Capital. In this short post, the author (a Venture Capitalist) explains why we need to allocate attention to what he calls 'tail risks'.


You’re Likely to Get the Coronavirus

Many countries have responded with containment attempts, despite the dubious efficacy and inherent harms of China’s historically unprecedented crackdown. Certain containment measures will be appropriate, but widely banning travel, closing down cities, and hoarding resources are not realistic solutions for an outbreak that lasts years. All of these measures come with risks of their own. Ultimately some pandemic responses will require opening borders, not closing them. At some point the expectation that any area will escape effects of COVID-19 must be abandoned: The disease must be seen as everyone’s problem.

James Hamblin (The Atlantic)

Will you get a cold at some point in your life? Yes, probably most winters in some form. Will you catch 'flu at some point in your life? Yes, probably. Will you get the coronavirus? Almost certainly, but it's not going to kill you unless you're very young, very old, or very weak.


Photo by Ivan Bandura

Work Operating Systems? No, We Need Work Ecosystems.

The principal limitation of the work OS concept is that companies do not operate independently: they are increasingly connected to other organizations. The model of work OS is too inwardly focused, when the real leverage may come from the interactions across company boundaries, or by lessening the barriers to cross-company cooperation. (In a sense, this is just the fullest expression of the ideal of cross-team and cross-department cooperation: if it’s good at the smallest scale, it is great at the largest scale.)

Stowe Boyd (GigaOM)

This post is interesting for a couple of reasons. First, I absolutely agree with the end game that Boyd describes here. Second, our co-op has just started using Monday.com and has found it... fine, doing what we need, but I can't wait for some organisation to go beyond the 'work OS'.


Career Moats 101

A career moat is an individual’s ability to maintain competitive advantages over your competition (say, in the job market) in order to protect your long term prospects, your employability, and your ability to generate sufficient financial returns to support the life you want to live. Just like a medieval castle, the moat serves to protect those inside the fortress and their riches from outsiders.

cedric chin (Commonplace)

I came across links to two different posts on the same blog this week, which made me investigate it further. The central thesis of the blog is that we should aim to build 'career moats', which is certainly an interesting way of thinking about things, and this link has some practical advice.


Daily life with the offline laptop

Having access to the Internet is a gift, I can access anything or anyone. But this comes with a few drawbacks. I can waste my time on anything, which is not particularly helpful. There are so many content that I only scratch things, knowing it will still be there when I need it, and jump to something else. The amount of data is impressive, one human can’t absorb that much, we have to deal with it.

Solène Rapenne

I love this idea of having a machine that remains offline and which you use for music and writing. Especially the writing. In fact, I was talking to someone earlier this week about using my old 1080p monitor in portrait mode with a Raspberry Pi to create a 'writing machine'. I might just do it...


Photo by Lauren McConachie

Spilling over: How working openly with anxiety affects my team

At a fundamental level, I believe work is never done, that there is always another challenge to explore, other ways to have a larger impact. Leaders need to inspire and motivate us to embrace that reality as an exciting opportunity rather than an endless drudge or a source of continual worry.

Sam Knuth (Opensource.com)

This is a great article. As a leader, and someone who's only admitted to myself recently that I am indeed an 'anxious person', I see similarities with my experiences here.


5 tricks to make the internet less distracting, so you can get stuff done

Maybe you want to be more productive at work. Maybe you want to spend more time being creative or learning new skills. Or maybe you just wish you spent more time communicating with the people you love and less time scrolling through websites that bring you brief moments of joy just frequently enough that you’re willing to tolerate the broader feeling of anxiety/jealousy/outrage.

The internet can be an amazing tool for pursuing these goals, but it’s not necessarily designed to push you toward it. You’ve got to work to create the environment for yourself. Here are some ways you can do just that.

Justin Pot (Fast Company)

It's now over five years since I wrote Curate or Be Curated. The article, and the warning it contains, stands the test of time, I think. The 'tricks' in this Fast Company article, shared by Ian O'Byrne, are a helpful place to start.


How to Dox Yourself on the Internet

To help our Times colleagues think like doxxers, we developed a formal program that consists of a series of repeatable steps that can be taken to clean up an online footprint. Our goal with this program is to empower people to control the information they share, and to provide them with tools and resources to have a better awareness around the information they intentionally and unintentionally share online.
We are now publicly releasing the content of this program for anyone to access. We think it is important for freelancers, activists, other newsrooms or people who want to take control of their own security online.

The NYT Open Team

'Doxxing' is the digging-up and sharing of personal information (e.g. home addresses) for the purposes of harassment. This approach, where you try to 'dox' yourself so that you can take protective steps, is a great idea.


Header image by Adli Wahid, who says: "Rest in Peace posters of Dr Li Wenliang, who warned authorities about the coronavirus outbreak, seen at Hosier Lane in Melbourne, Australia. Hosier Lane is known for its street art."

Friday facings

This week's links seem to have a theme about faces and looking at them through screens. I'm not sure what that says about either my network, or my interests, but there we are...

As ever, let me know what resonates with you, and if you have any thoughts on what's shared below!


The Age of Instagram Face

The human body is an unusual sort of Instagram subject: it can be adjusted, with the right kind of effort, to perform better and better over time. Art directors at magazines have long edited photos of celebrities to better match unrealistic beauty standards; now you can do that to pictures of yourself with just a few taps on your phone.

Jia Tolentino (The New Yorker)

People are literally going to see plastic surgeons with 'Facetuned' versions of themselves (especially women, although there's increasing pressure on young men too). It's hard not to think that we're heading for a kind of dystopia when people want to look like cartoonish versions of themselves.


What Makes A Good Person?

What I learned as a child is that most people don’t even meet the responsibilities of their positions (husband, wife, teacher, boss, politicians, whatever.) A few do their duty, and I honor them for it, because it is rare. But to go beyond that and actually be a man of honor is unbelievably rare.

Ian Welsh

This question, as I've been talking with my therapist about, is one I ask myself all the time. Recently, I've settled on Marcus Aurelius' approach: "Waste no more time arguing about what a good man should be. Be one."


Boredom is but a window to a sunny day beyond the gloom

Boredom can be our way of telling ourselves that we are not spending our time as well as we could, that we should be doing something more enjoyable, more useful, or more fulfilling. From this point of view, boredom is an agent of change and progress, a driver of ambition, shepherding us out into larger, greener pastures.

Neel Burton (Aeon)

As I've discussed before, I'm not so sure about the fetishisation of 'boredom'. It's good to be creative and let the mind wander. But boredom? Nah. There's too much interesting stuff out there.


Resting Risk Face

Unlock your devices with a surgical mask that looks just like you.

I don't usually link to products in this roundup, but I'm not sure this is 100% serious. Good idea, though!


The world's biggest work-from-home experiment has been triggered by coronavirus

For some employees, like teachers who have conducted classes digitally for weeks, working from home can be a nightmare.
But in other sectors, this unexpected experiment has been so well received that employers are considering adopting it as a more permanent measure. For those who advocate more flexible working options, the past few weeks mark a possible step toward widespread -- and long-awaited -- reform.

Jessie Yeung (CNN)

Every cloud has a silver lining, I guess? Working from home is great, especially when you have a decent setup.


Setting Up Your Webcam, Lights, and Audio for Remote Work, Podcasting, Videos, and Streaming

Only you really know what level of clarity you want from each piece of your setup. Are you happy with what you have? Please, dear Lord, don't spend any money. This is intended to be a resource if you want more and don't know how to do it, not a stress or a judgment to anyone happy with their current setup

And while it's a lot of fun to have a really high-quality webcam for my remote work, would I have bought it if I didn't have a more intense need for high quality video for my YouTube stuff? Hell no. Get what you need, in your budget. This is just a resource.

This is a fantastic guide. I bought a great webcam when I saw it drop in price via CamelCamelCamel, and a decent mic when I recorded the TIDE podcast with Dai. It really does make a difference.


Large screen phones: a challenge for UX design (and human hands)

I know it might sound like I have more questions than answers, but it seems to me that we are missing out on a very basic solution for the screen size problem. Manufacturers did so much to increase the screen size, computational power and battery capacity whilst keeping phones thin, that switching the apps navigation to the bottom should have been the automatic response to this new paradigm.

Maria Grilo (Imaginary Cloud)

The struggle is real. I invested in a new phone this week (a OnePlus 7 Pro 5G) and, unlike the phone it replaced from 2017, it's definitely a hold-with-two-hands device.


Society Desperately Needs An Alternative Web

What has also transpired is a web of unbridled opportunism and exploitation, uncertainty and disparity. We see increasing pockets of silos and echo chambers fueled by anxiety, misplaced trust, and confirmation bias. As the mainstream consumer lays witness to these intentions, we notice a growing marginalization that propels more to unplug from these communities and applications to safeguard their mental health. However, the addiction technology has produced cannot be easily remedied. In the meantime, people continue to suffer.

Hessie Jones (Forbes)

Another call to re-decentralise the web, this time based on arguments about centralised services not being able to handle the scale of abuse and fraudulent activity.


UK Google users could lose EU GDPR data protections

It is understood that Google decided to move its British users out of Irish jurisdiction because it is unclear whether Britain will follow GDPR or adopt other rules that could affect the handling of user data.

If British Google users have their data kept in Ireland, it would be more difficult for British authorities to recover it in criminal investigations.

The recent Cloud Act in the US, however, is expected to make it easier for British authorities to obtain data from US companies. Britain and the US are also on track to negotiate a broader trade agreement.

Samuel Gibbs (The Guardian)

I'm sure this is a business decision as well, but I guess it makes sense given post-Brexit uncertainty about privacy legislation. It's a shame, though, and a little concerning.


Enjoy this? Sign up for the weekly roundup, become a supporter, or download Thought Shrapnel Vol.1: Personal Productivity!


Header image by Luc van Loon

Thought Shrapnel Vol.1: Personal Productivity

Inspired by Venkatesh Rao's Ribbonfarm Roughs series, I've decided to start creating ebooks, collecting together in one place the best Thought Shrapnel articles on particular topics.

In this first of a series that I'll be publishing over the coming weeks and months, I've chosen to curate selected articles on personal productivity written between 2018 and 2019.

Some may see this as an opportunity to back Thought Shrapnel if they can't commit to supporting this work on a monthly basis. You may name your price for this book, with a suggested amount of £2.50 (currently around $3.25).

If nothing shows above, or you want a direct link to share, please try: https://gum.co/TSvol1

I'd welcome your feedback on this, including content, format, and length, in the comments section below, or by email.


Note: supporters have already received this book via email.

New to Thought Shrapnel? Try this!

I'm experimenting with turning articles from Thought Shrapnel into ebooks. Here's a sampler featuring five articles from this year so far.

Thought Shrapnel sampler

The plan is to make a series of ebooks available free of charge to supporters, and sell them to people who may want to contribute to the continuation of Thought Shrapnel in a one-off way.

Friday feelings

It's Friday again, so I'm here trawling through not only the most interesting stuff that I've read this week, but also verbs that begin with the letter 'f'.

Happy Valentine's Day! Especially to my wonderful wife Hannah. We'll have been together 20 years this coming May 😍


Flying to Conferences

The problem - and the solution - to the issues of environment and poverty and the rest lie in the hands of those people who have the power to change what we're doing as a society, the one percent who hold most of the world's power and wealth. They benefit from environmental degradation and we pay the price, just as they benefit from oppressive labour laws, the corruption of government officials, and ownership of real and intellectual property.

Stephen Downes (halfanhour)

This is a fantastic post and one that's made me feel a bit better about the travel I do for work. Downes deconstructs various arguments, and shows the systemic problems around sustainability. Highly recommended.


Why innovation can't happen without standardization

Perceptions play a role in the conflict between standardization and innovation. People who only want to focus on standardization must remember that even the tools and processes that they want to promote as "the standard" were once new and represented change. Likewise, people who only want to focus on innovation have to remember that in order for a tool or process to provide value to an organization, it has to be stable enough for that organization to use it over time.

Len DiMaggio (opensource.com)

Opensource.com is celebrating its 10-year anniversary, and it's also a decade since I seem to have written for the first time about innovation being predicated on standardisation. I then expanded upon that a year later in this post. As DiMaggio says, innovation and standardisation are two halves of one solution.


How to reduce digital distractions: advice from medieval monks

Distraction is an old problem, and so is the fantasy that it can be dodged once and for all. There were just as many exciting things to think about 1,600 years ago as there are now. Sometimes it boggled the mind.

Jamie Kreiner (aeon)

This, via Kottke, has a title redolent of clickbait, and is an amusing diversion. Its conclusion, however, is important: distraction isn't due to our smartphones, but to the way our brains are wired, and our lack of practice concentrating on things of importance and value.


How Medieval Manuscript Makers Experimented with Graphic Design

The greater availability of paper in the 15th century meant more people could make books, with medical texts being some of the most popular. A guide to diagnosing diseases based on the colors of urine — a common approach in the era — has two pages illustrating several flasks, so the reader could readily compare this organized knowledge. A revolving “volvelle” diagram on another manuscript allowed readers to make their own astronomical calculations for the moon and time of night. Scraps of medieval songs on loose pages and herbals further demonstrate how practical usage was important in medieval design.

Allison Meier

I think I came across this via Hacker News, which is always a great place to find interesting stuff, technical and otherwise. The photographs and illustrations are just beautiful.


Yong Zhao: PISA Peculiarities (2): Should Schools Promote a Competitive or Cooperative Culture?

As I have written elsewhere, PISA has the bad habit of looking for things that would work universally to improve education or at least test scores and ignoring contextual factors that may actually play a more important role in the quality of education. In so doing, PISA does not (or cannot) have a coherent conceptual framework for understanding education as a contextual and situated phenomenon. As a result, it just throws various variables into the equation and wishes that some would turn out to be the magical policy or practice that improves education, without thinking how the variables act and interact with each other in specific contexts.

Yong Zhao (National Education Policy Center)

Via Stephen Downes, I really appreciate this analysis of PISA test results, which compare students from different countries. To my mind, capitalism perpetuates the myth that we're all in competition with each other, inculcating it at school. Nothing could be further from the truth; we humans are communicators and co-operators.


1,000 True Fans? Try 100

The 100 True Fans concept isn’t for everyone, nor is 1,000 True Fans. Creators that have larger, more diffuse audiences with weaker allegiance or engagement are likely better off monetizing through sponsorships or branded products. For many, that path will be more lucrative—and require less heavy lifting—than designing the sort of high-value, personalized program 100 True Fans demand.

Li Jin (A16z)

An interesting read. There are currently 53 patrons of Thought Shrapnel, a number that I had hoped would be much higher by this point. Perhaps I need to pivot into exclusive content, or perhaps just return to sponsorship?


Regulator Ofcom to have more powers over UK social media

The government has now announced it is "minded" to grant new powers to Ofcom - which currently only regulates the media and the telecoms industry, not internet safety.

Ofcom will have the power to make tech firms responsible for protecting people from harmful content such as violence, terrorism, cyber-bullying and child abuse - and platforms will need to ensure that content is removed quickly.

They will also be expected to "minimise the risks" of it appearing at all.

BBC News

While I'm all for reducing the amount of distressing, radicalising, and harmful content accessed by vulnerable people, I do wonder exactly how this will work. A slide in a recent 'macro trends' deck by Benedict Evans shows the difficulties faced by platforms, and society more generally.


Why People Get the ‘Sunday Scaries’

When I asked Anne Helen Petersen what would cure the Sunday scaries, she laughed and gave a two-word answer: “Fix capitalism.” “You have to get rid of the conditions that are creating precarity,” she says. “People wouldn't think that universal health care has anything to do with the Sunday scaries, but it absolutely does … Creating a slightly different Sunday routine isn't going to change the massive structural problems.”

One potential system-wide change she has researched—smaller than implementing universal health care, but still big—is a switch to a four-day workweek. “When people had that one more day of leisure, it opened up so many different possibilities to do the things you actually want to do and to actually feel restored,” she says.

Joe Pinsker (The Atlantic)

As one t-shirt I saw put it: "You don't hate Mondays. You hate Capitalism."


A 2020 Retrospective on the History of Work

The future of work is Open. Open work practices allow for unhindered access to the right context, the bigger picture, and important information when it’s needed most. All teams can do amazing things when they work Open.  

Atlassian

Via Kottke, this is an interesting summary of changes in the workplace since the 1950s. And of course, given I'm part of a co-op that "works to spread the culture, processes and benefits of open" the conclusion is spot-on.


Enjoy this? Sign up for the weekly roundup and/or become a supporter!


Image by Nicola Fioravanti

Microcast #086 — Strategies for dealing with surveillance capitalism

Over the last year (at least) I've been talking about the dangers of surveillance capitalism. Stephen Haggard picked up on this and, after an email conversation, sent through an audio provocation for discussion.


If you'd like to join this discussion, feel free to comment on this microcast, or reply with your own thoughts in audio or text format in a part of the web under your control!

Show notes




Image cropped and rotated from an original by Tim Gouw

There are many non-essential activities, moths of precious time, and it's worse to take an interest in irrelevant things than do nothing at all

I confess to not yet having read Elizabeth Emens' book The Art of Life Admin but it's definitely on my list to read this year. A recent BBC Worklife article cites the book and the concept of 'attention residue'. This is defined as multiple tasks and obligations which split our attention and reduce our overall performance.

“If you have attention residue, you are basically operating with part of your cognitive resources being busy, and that can have a wide range of impacts – you might not be as efficient in your work, you might not be as good a listener, you may get overwhelmed more easily, you might make errors, or struggle with decisions and your ability to process information.”

Sophie Leroy (associate professor of management at the University of Washington)

Attention residue makes us procrastinate at work, and affects our sleep. And sleep, as I explained in my (unfinished) audiobook #uppingyourgame: a practical guide to personal productivity (v2) is one of the three pillars of productivity.

The other two, if you're wondering, are exercise and nutrition. (While I know very talented people who don't exercise nor look after their bodies, I don't know any very productive people who aren't careful about keeping active and what they put into their bodies.)

Back to attention residue, and as the author of the BBC article points out, getting rid of life admin and the associated attention residue means you can enjoy life a little more, guilt-free:

In my case, the GYLIO experiment proved that self-care is less about carving out time to relax amid chaos, and more about removing to-dos from our crowded lives. With some life admin cleared away, I had a bubble bath and enjoyed the smug delight of a life – momentarily – in order.

Madeleine Dore

For me, sleep is extremely important. As I learned when our children were very small, I really can't function properly if I have less than seven hours' sleep for two nights in a row.

As a result, I tend to go to bed early, usually before my wife, and definitely having ensured that I've avoided screens after 21:00. I'm definitely in bed by 22:00 and then read until about 22:30.

That means, as has been happening recently, if I am disturbed around 05:30, I can get up and carve out some quiet time to myself before the family awakens. Usually, though, I sleep until around 06:30 which means that, according to my smartband, I'm well-rested.


While we're on the subject of sleep and sleepiness, if you drink coffee first thing in the morning, you might want to rethink that approach:

Source: CNBC

I stopped drinking coffee about a year and a half ago, and instead drink around three cups of tea over the course of the day. Otherwise, I've found, it's very easy to use caffeine as an accelerator pedal and alcohol as a brake pedal.


Without productive routines it's easy to become overwhelmed. In an article I shared in last Friday's link roundup about communicating better at work, Michael Natkin suggests that feeling overwhelmed is a common situation:

We’ve all been there. You’ve got so much on your plate that you don’t know where to start. Things that look like they will take fifteen minutes balloon into five-day poop-storms. Every item you cross off your list seems to spawn three more. The check engine light just went on in your car. And now your boss is chasing you down for an unexpected fire drill. 

Michael Natkin

The temptation, when you're feeling overwhelmed, is to try and hide, to let no-one know that you're not coping. But that's a really dangerous approach, and the exact opposite of what you should do.

Instead, Natkin suggests an approach of 'over-communicating' which, he says, engages empathy and invites trust:

  1. Make a (prioritised) list
  2. Write an email to your line manager (and anyone else you should inform) giving realistic estimates of when your projects will be complete.
  3. Agree on a plan, and keep everyone updated

You should ask for feedback on your proposed course of action, he says, rather than giving it as a fait accompli.

I think this is a great strategy. What we all need to realise is that, usually, we were chosen for the position we're in, and therefore we should use that to fuel our confidence and self-esteem. Communicating a plan is always better than hiding.


Finally, a word about admin. Some people absolutely love spreadsheets, get a little thrill when they reconcile transactions, and don't mind filling in forms. If, like me, that sounds like the exact opposite of the things I enjoy doing, then you need some admin support.

You can pay for it, you can ask your employer to provide it, or you can call in favours. Whichever route you take, without it you're going to eventually drown in life admin at home and work admin at the office.

My only bit of advice would be to really set your stall out for this. Don't whine or complain about your workload; instead, explain the situation and the impact of admin on your productivity. Put it in financial terms, if necessary.


What are your tips around "attention residue" and what to do when feeling overwhelmed?




Image by Max Kleinen. Quotation-as-title by Baltasar Gracián

Friday flaggings

As usual, a mixed bag of goodies, just like you used to get from your favourite sweet shop as a kid. Except I don't hold the bottom of the bag, so you get full value.

Let me know which you found tasty and which ones suck (if you'll pardon the pun).


Andrei Tarkovsky’s Message to Young People: “Learn to Be Alone,” Enjoy Solitude

I don’t know… I think I’d like to say only that [young people] should learn to be alone and try to spend as much time as possible by themselves. I think one of the faults of young people today is that they try to come together around events that are noisy, almost aggressive at times. This desire to be together in order to not feel alone is an unfortunate symptom, in my opinion. Every person needs to learn from childhood how to spend time with oneself. That doesn’t mean he should be lonely, but that he shouldn’t grow bored with himself because people who grow bored in their own company seem to me in danger, from a self-esteem point of view.

Andrei Tarkovsky

This article in Open Culture quotes the film-maker Andrei Tarkovsky. Having just finished my first set of therapy sessions, I have to say that the metaphor of "putting on your own oxygen mask before helping others" would be a good takeaway from it. That sounds selfish, but as Tarkovsky points out here, other approaches can lead to the destruction of self-esteem.


Being a Noob

[T]here are two sources of feeling like a noob: being stupid, and doing something novel. Our dislike of feeling like a noob is our brain telling us "Come on, come on, figure this out." Which was the right thing to be thinking for most of human history. The life of hunter-gatherers was complex, but it didn't change as much as life does now. They didn't suddenly have to figure out what to do about cryptocurrency. So it made sense to be biased toward competence at existing problems over the discovery of new ones. It made sense for humans to dislike the feeling of being a noob, just as, in a world where food was scarce, it made sense for them to dislike the feeling of being hungry.

Paul Graham

I'm not sure about the evolutionary framing, but there's definitely something in this about having the confidence (and humility) to be a 'noob' and learn things as a beginner.


You Aren’t Communicating Nearly Enough

Imagine you were to take two identical twins and give them the same starter job, same manager, same skills, and the same personality. One competently does all of their work behind a veil of silence, not sharing good news, opportunities, or challenges, but just plugs away until asked for a status update. The other does the same level of work but communicates effectively, keeping their manager and stakeholders proactively informed. Which one is going to get the next opportunity for growth?

Michael Natkin

I absolutely love this post. As a Product Manager, I've been talking repeatedly recently about making our open-source project 'legible'. As remote workers, that means over-communicating and, as pointed out in this post, being proactive in that communication. Highly recommended.


The Boomer Blockade: How One Generation Reshaped the Workforce and Left Everyone Behind

This is a profound trend. The average age of incoming CEOs for S&P 500 companies has increased about 14 years over the last 14 years

From 1980 to 2001 the average age of a CEO dropped four years, and then from 2005 to 2019 the average incoming age of new CEOs increased 14 years!

This means that the average birth year of a CEO has not budged since 2005. The best predictor of becoming a CEO of our most successful modern institutions?

Being a baby boomer.

Paul Millerd

Wow. This, via Marginal Revolution, pretty much speaks for itself.


The Ed Tech suitcase

Consider packing a suitcase for a trip. It contains many different items – clothes, toiletries, books, electrical items, maybe food and drink or gifts. Some of these items bear a relationship to others, for example underwear, and others are seemingly unrelated, for example a hair dryer. Each brings their own function, which has a separate existence and relates to other items outside of the case, but within the case, they form a new category, that of “items I need for my trip.” In this sense the suitcase resembles the ed tech field, or at least a gathering of ed tech individuals, for example at a conference

If you attend a chemistry conference and have lunch with strangers, it is highly likely they will nearly all have chemistry degrees and PhDs. This is not the case at an ed tech conference, where the lunch table might contain people with expertise in computer science, philosophy, psychology, art, history and engineering. This is a strength of the field. The chemistry conference suitcase then contains just socks (but of different types), but the ed tech suitcase contains many different items. In this perspective then the aim is not to make the items of the suitcase the same, but to find means by which they meet the overall aim of usefulness for your trip, and are not random items that won’t be needed. This suggests a different way of approaching ed tech beyond making it a discipline.

Martin Weller

At the start of this year, it became (briefly) fashionable among ageing (mainly North American) men to state that they had "never been an edtech guy". Followed by something something pedagogy, or something something people. In this post, Martin Weller uses a handy metaphor to explain that edtech may not be a discipline, but it's a useful field (or area of focus) nonetheless.


Why Using WhatsApp is Dangerous

Backdoors are usually camouflaged as “accidental” security flaws. In the last year alone, 12 such flaws have been found in WhatsApp. Seven of them were critical – like the one that got Jeff Bezos. Some might tell you WhatsApp is still “very secure” despite having 7 backdoors exposed in the last 12 months, but that’s just statistically improbable.

[...]

Don’t let yourself be fooled by the tech equivalent of circus magicians who’d like to focus your attention on one isolated aspect all while performing their tricks elsewhere. They want you to think about end-to-end encryption as the only thing you have to look at for privacy. The reality is much more complicated. 

Pavel Durov

Facebook products are bad for you, for society, and for the planet. Choose alternatives and encourage others to do likewise.


Why private micro-networks could be the future of how we connect

The current social-media model isn’t quite right for family sharing. Different generations tend to congregate in different places: Facebook is Boomer paradise, Instagram appeals to Millennials, TikTok is GenZ central. (WhatsApp has helped bridge the generational divide, but its focus on messaging is limiting.)

Updating family about a vacation across platforms—via Instagram stories or on Facebook, for example—might not always be appropriate. Do you really want your cubicle pal, your acquaintance from book club, and your high school frenemy to be looped in as well?

Tanya Basu

Some apps are just before their time. Take Path, for example, which my family used for almost the entire eight years it was around, from 2010 to 2018. The interface was great, the experience cosy, and the knowledge that you weren't sharing with everyone outside of a close circle? Priceless.


'Anonymized' Data Is Meaningless Bullshit

While one data broker might only be able to tie my shopping behavior to something like my IP address, and another broker might only be able to tie it to my rough geolocation, that’s ultimately not much of an issue. What is an issue is what happens when those “anonymized” data points inevitably bleed out of the marketing ecosystem and someone even more nefarious uses it for, well, whatever—use your imagination. In other words, when one data broker springs a leak, it’s bad enough—but when dozens spring leaks over time, someone can piece that data together in a way that’s not only identifiable but chillingly accurate.

Shoshana Wodinsky

This idea of cumulative harm is a particularly difficult one to explain (and prove) not only in the world of data, but in every area of life.


"Hey Google, stop tracking me"

Google recently invented a third way to track who you are and what you view on the web.

[...]

Each and every install of Chrome, since version 54, have generated a unique ID. Depending upon which settings you configure, the unique ID may be longer or shorter.

[...]

So every time you visit a Google web page or use a third party site which uses some Google resource, this ID is sent to Google and can be used to track which website or individual page you are viewing. As Google’s services such as scripts, captchas and fonts are used extensively on the most popular web sites, it’s likely that Google tracks most web pages you visit.

Magic Lasso

Use Firefox. Use multi-account containers and extensions that protect your privacy.


The Golden Age of the Internet and social media is over

In the last year I have seen more and more researchers like danah boyd suggesting that digital literacies are not enough. Given that some on the Internet have weaponized these tools, I believe she is right. Moving beyond digital literacies means thinking about the epistemology behind digital literacies and helping to “build the capacity to truly hear and embrace someone else’s perspective and teaching people to understand another’s view while also holding their view firm” (boyd, March 9, 2018). We can still rely on social media for our news but we really owe it to ourselves to do better in further developing digital literacies, and knowing that just because we have discussions through screens that we should not be so narcissistic to believe that we MUST be right or that the other person is simply an idiot.

Jimmy Young

I'd argue, as I did recently in this talk, that what Young and boyd are talking about here is actually a central tenet of digital literacies.


Image via Introvert doodles



Software ate the world, so all the world’s problems get expressed in software

Benedict Evans recently posted his annual 'macro trends' slide deck. It's incredibly insightful, and a work of (minimalist) art. This article's title comes from his conclusion, and you can see below which of the 128 slides jumped out at me from the deck:

For me, what the deck as a whole does is place some of the issues I've been thinking about in a wider context.


My team is building a federated social network for educators, so I'm particularly tuned-in to conversations about the effect social media is having on society. A post by Harold Jarche where he writes about his experience of Twitter as a rage machine caught my attention, especially the part where he talks about how people are happy to comment based on the 'preview' presented to them in embedded tweets:

Research on the self-perception of knowledge shows how viewing previews without going to the original article gives an inflated sense of understanding on the subject, “audiences who only read article previews are overly confident in their knowledge, especially individuals who are motivated to experience strong emotions and, thus, tend to form strong opinions.” Social media have created a worldwide Dunning-Kruger effect. Our collective self-perception of knowledge acquired through social media is greater than it actually is.

Harold Jarche

I think our experiment with general-purpose social networks is slowly coming to an end, or at least will do over the next decade. What I mean is that, while we'll still have places where you can broadcast anything to anyone, the digital environments we'll spend more time in will be what Venkatesh Rao calls the 'cozyweb':

Unlike the main public internet, which runs on the (human) protocol of “users” clicking on links on public pages/apps maintained by “publishers”, the cozyweb works on the (human) protocol of everybody cutting-and-pasting bits of text, images, URLs, and screenshots across live streams. Much of this content is poorly addressable, poorly searchable, and very vulnerable to bitrot. It lives in a high-gatekeeping slum-like space comprising slacks, messaging apps, private groups, storage services like dropbox, and of course, email.

Venkatesh Rao

That's on a personal level. I should imagine organisational spaces will be a bit more organised. Back to Jarche:

We need safe communities to take time for reflection, consideration, and testing out ideas without getting harassed. Professional social networks and communities of practices help us make sense of the world outside the workplace. They also enable each of us to bring to bear much more knowledge and insight that we could do on our own.

Harold Jarche

...or to use Rao's diagram which is so-awful-it's-useful:

Image by Venkatesh Rao

Of course, blockchain/crypto could come along and solve all of our problems. Except it won't. Humans are humans (are humans).


Ever since Eli Pariser's TED talk urging us to beware online "filter bubbles", people have been wringing their hands about ensuring we have 'balance' in our networks.

Interestingly, some recent research by the Reuters Institute at Oxford University paints a slightly different picture. The researcher, Dr Richard Fletcher, begins by investigating how people access the news.

Preferred access to news
Diagram via the Reuters Institute, Oxford University

Fletcher draws a distinction between different types of personalisation:

Self-selected personalisation refers to the personalisations that we voluntarily do to ourselves, and this is particularly important when it comes to news use. People have always made decisions in order to personalise their news use. They make decisions about what newspapers to buy, what TV channels to watch, and at the same time which ones they would avoid

Academics call this selective exposure. We know that it's influenced by a range of different things such as people's interest levels in news, their political beliefs and so on. This is something that has pretty much always been true.

Pre-selected personalisation is the personalisation that is done to people, sometimes by algorithms, sometimes without their knowledge. And this relates directly to the idea of filter bubbles because algorithms are possibly making choices on behalf of people and they may not be aware of it.

The reason this distinction is particularly important is because we should avoid comparing pre-selected personalisation and its effects with a world where people do not do any kind of personalisation to themselves. We can't assume that offline, or when people are self-selecting news online, they're doing it in a completely random way. People are always engaging in personalisation to some extent and if we want to understand the extent of pre-selected personalisation, we have to compare it with the realistic alternative, not hypothetical ideals.

Dr Richard Fletcher

Read the article for the details, but the takeaways for me were twofold. First, that we might be blaming social media for wider and deeper divisions within society, and second, that teaching people to search for information (rather than stumble across it via feeds) might be the best strategy:

People who use search engines for news on average use more news sources than people who don't. More importantly, they're more likely to use sources from both the left and the right. 
People who rely mainly on self-selection tend to have fairly imbalanced news diets. They either have more right-leaning or more left-leaning sources. People who use search engines tend to have a more even split between the two.

Dr Richard Fletcher

Useful as it is, what I think this research misses out is the 'black box' algorithms that seek to keep people engaged and consuming content. YouTube is the poster child for this. As Jarche comments:

We are left in a state of constant doubt as conspiratorial content becomes easier to access on platforms like YouTube than accessing solid scientific information in a journal, much of which is behind a pay-wall and inaccessible to the general public.

Harold Jarche

This isn't an easy problem to solve.


We might like to pretend that human beings are rational agents, but this isn't actually true. Let's take something like climate change: we're not arguing about the facts here, we're arguing about politics. Adrian Bardon, writing in Fast Company, explains:

In theory, resolving factual disputes should be relatively easy: Just present evidence of a strong expert consensus. This approach succeeds most of the time, when the issue is, say, the atomic weight of hydrogen.

But things don’t work that way when the scientific consensus presents a picture that threatens someone’s ideological worldview. In practice, it turns out that one’s political, religious, or ethnic identity quite effectively predicts one’s willingness to accept expertise on any given politicized issue.

Adrian Bardon

This is pretty obvious when we stop to think about it for a moment; beliefs are bound up with identity, and that's not something that's so easy to change.

In ideologically charged situations, one’s prejudices end up affecting one’s factual beliefs. Insofar as you define yourself in terms of your cultural affiliations, information that threatens your belief system—say, information about the negative effects of industrial production on the environment—can threaten your sense of identity itself. If it’s part of your ideological community’s worldview that unnatural things are unhealthful, factual information about a scientific consensus on vaccine or GM food safety feels like a personal attack.

Adrian Bardon

So how do we change people's minds when they're objectively wrong? Brian Resnick, writing for Vox, suggests the best approach might be 'deep canvassing':

Giving grace. Listening to a political opponent’s concerns. Finding common humanity. In 2020, these seem like radical propositions. But when it comes to changing minds, they work.

[...]

The new research shows that if you want to change someone’s mind, you need to have patience with them, ask them to reflect on their life, and listen. It’s not about calling people out or labeling them fill-in-the-blank-phobic. Which makes it feel like a big departure from a lot of the current political dialogue.

Brian Resnick

This approach, it seems, works:

Diagram by Stanford University, via Vox

So it seems there is some hope of fixing the world's problems. It's just that the solutions point towards doing the hard work of talking to people and not just treating them as containers for opinions to shoot down at a distance.



Friday featherings

Behold! The usual link round-up of interesting things I've read in the last week.

Feel free to let me know if anything particularly resonated with you via the comments section below...


Part I - What is a Weird Internet Career?

Weird Internet Careers are the kinds of jobs that are impossible to explain to your parents, people who somehow make a living from the internet, generally involving a changing mix of revenue streams. Weird Internet Career is a term I made up (it had no google results in quotes before I started using it), but once you start noticing them, you’ll see them everywhere. 

Gretchen McCulloch (All Things Linguistic)

I love this phrase, which I came across via Dan Hon's newsletter. This is the first in a whole series of posts, which I am yet to explore in its entirety. My aim in life is now to make my career progressively more (internet) weird.


Nearly half of Americans didn’t go outside to recreate in 2018. That has the outdoor industry worried.

While the Outdoor Foundation’s 2019 Outdoor Participation Report showed that while a bit more than half of Americans went outside to play at least once in 2018, nearly half did not go outside for recreation at all. Americans went on 1 billion fewer outdoor outings in 2018 than they did in 2008. The number of adolescents ages 6 to 12 who recreate outdoors has fallen four years in a row, dropping more than 3% since 2007 

The number of outings for kids has fallen 15% since 2012. The number of moderate outdoor recreation participants declined, and only 18% of Americans played outside at least once a week. 

Jason Blevins (The Colorado Sun)

One of Bruce Willis' lesser-known films is Surrogates (2009). It's a short, pretty average film with a really interesting central premise: most people stay at home and send their surrogates out into the world. Over a decade after the film was released, a combination of things (including virulent viruses, screen-focused leisure time, and safety fears) seem to suggest it might be a predictor of our medium-term future.


I’ll Never Go Back to Life Before GDPR

It’s also telling when you think about what lengths companies have had to go through to make the EU versions of their sites different. Complying with GDPR has not been cheap. Any online business could choose to follow GDPR by default across all regions and for all visitors. It would certainly simplify things. They don’t, though. The amount of money in data collection is too big.

Jill Duffy (OneZero)

This is a strangely-titled article, but a decent explainer on what the web looks and feels like to those outside the EU. The author is spot-on when she talks about how GDPR and the recent California Privacy Law could be applied everywhere, but they're not. Because surveillance capitalism.


You Are Now Remotely Controlled

The belief that privacy is private has left us careening toward a future that we did not choose, because it failed to reckon with the profound distinction between a society that insists upon sovereign individual rights and one that lives by the social relations of the one-way mirror. The lesson is that privacy is public — it is a collective good that is logically and morally inseparable from the values of human autonomy and self-determination upon which privacy depends and without which a democratic society is unimaginable.

Shoshana Zuboff (The New York Times)

I fear that the length of Zuboff's (excellent) book on surveillance capitalism, her use of terms in this article such as 'epistemic inequality', and the subtlety of her arguments may mean that she's preaching to the choir here.


How to Raise Media-Savvy Kids in the Digital Age

The next time you snap a photo together at the park or a restaurant, try asking your child if it’s all right that you post it to social media. Use the opportunity to talk about who can see that photo and show them your privacy settings. Or if a news story about the algorithms on YouTube comes on television, ask them if they’ve ever been directed to a video they didn’t want to see.

Meghan Herbst (WIRED)

There's some useful advice in this WIRED article, especially that given by my friend Ian O'Byrne. The difficulty I've found is when one of your kids becomes a teenager and companies like Google contact them directly telling them they can have full control of their accounts, should they wish...


Control-F and Building Resilient Information Networks

One reason the best lack conviction, though, is time. They don’t have the time to get to the level of conviction they need, and it’s a knotty problem, because that level of care is precisely what makes their participation in the network beneficial. (In fact, when I ask people who have unintentionally spread misinformation why they did so, the most common answer I hear is that they were either pressed for time, or had a scarcity of attention to give to that moment)

But what if — and hear me out here — what if there was a way for people to quickly check whether linked articles actually supported the points they claimed to? Actually quoted things correctly? Actually provided the context of the original from which they quoted

And what if, by some miracle, that function was shipped with every laptop and tablet, and available in different versions for mobile devices?

This super-feature actually exists already, and it’s called control-f.

Roll the animated GIF!

Mike Caulfield (Hapgood)

I find it incredible, but absolutely believable, that only around 10% of internet users know how to use Ctrl-F to find something within a web page. On mobile, it's just as easy, as there's an option within most (all?) browsers to 'search within page'. I like Mike's work, as not only is it academic, it's incredibly practical.
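Mike's "super-feature" is simple enough to sketch. The following is a minimal, hypothetical illustration (not anything Caulfield ships) of what Ctrl-F effectively does: extract a page's visible text and check, case-insensitively, whether it actually contains the phrase being attributed to it. Everything here is Python standard library; the `page_contains` name and the sample article are my own invention for the example.

```python
from html.parser import HTMLParser


class _TextExtractor(HTMLParser):
    """Collects the visible text of an HTML document, ignoring tags."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)


def page_contains(html: str, phrase: str) -> bool:
    """Roughly what Ctrl-F does: case-insensitive search of a page's
    visible text for an exact phrase."""
    parser = _TextExtractor()
    parser.feed(html)
    # Collapse runs of whitespace so line breaks in the HTML don't
    # break up a phrase that renders as one sentence.
    text = " ".join(" ".join(parser.chunks).split())
    return phrase.lower() in text.lower()


# Example: does the linked article actually contain the quoted claim?
article = "<html><body><p>Only around 10% of users know about Ctrl-F.</p></body></html>"
print(page_contains(article, "around 10% of users"))  # True
print(page_contains(article, "90% of users"))         # False
```

In practice you'd fetch the linked article first, of course, but the checking step is exactly this: a literal text search, which is why a keyboard shortcut covers so much of the fact-checking workflow.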


EdX launches for-credit credentials that stack into bachelor's degrees

The MicroBachelors also mark a continued shift for EdX, which made its name as one of the first MOOC providers, to a wider variety of educational offerings 

In 2018, EdX announced several online master's degrees with selective universities, including the Georgia Institute of Technology and the University of Texas at Austin.

Two years prior, it rolled out MicroMasters programs. Students can complete the series of graduate-level courses as a standalone credential or roll them into one of EdX's master's degrees.

That stackability was something EdX wanted to carry over into the MicroBachelors programs, Agarwal said. One key difference, however, is that the undergraduate programs will have an advising component, which the master's programs do not. 

Natalie Schwartz (Education Dive)

This is largely a rewritten press release with a few extra links, but I found it interesting as it's a concrete example of a couple of things. First, the ongoing shift in Higher Education towards students-as-customers. Second, the viability of microcredentials as a 'stackable' way to build a portfolio of skills.

Note that, as a graduate of degrees in the Humanities, I'm not saying this approach can be used for everything, but for those using Higher Education as a means to an end, this is exactly what's required.


How much longer will we trust Google’s search results?

Today, I still trust Google to not allow business dealings to affect the rankings of its organic results, but how much does that matter if most people can’t visually tell the difference at first glance? And how much does that matter when certain sections of Google, like hotels and flights, do use paid inclusion? And how much does that matter when business dealings very likely do affect the outcome of what you get when you use the next generation of search, the Google Assistant?

Dieter Bohn (The Verge)

I've used DuckDuckGo as my go-to search engine for years now. It used to be that I'd have to switch to Google for around 10% of my searches. That's now down to zero.


Coaching – Ethics

One of the toughest situations for a product manager is when they spot a brewing ethical issue, but they’re not sure how they should handle the situation.  Clearly this is going to be sensitive, and potentially emotional. Our best answer is to discover a solution that does not have these ethical concerns, but in some cases you won’t be able to, or may not have the time.

[...]

I rarely encourage people to leave their company, however, when it comes to those companies that are clearly ignoring the ethical implications of their work, I have and will continue to encourage people to leave.

Marty Cagan (SVPG)

As someone with a sensitive radar for these things, I've chosen to work with ethical people and for ethical organisations. As Cagan says in this post, if you're working for a company that ignores the ethical implications of their work, then you should leave. End of story.


Image via webcomic.name

Microcast #085 — Extensions for Mozilla Firefox

In the last quarter of 2019, I got rid of my Google Pixelbook and Chromebox, and switched full-time to Linux and Firefox.

I still need to dip into Chromium occasionally to use Loom but, on the whole, I'm really happy with my new setup. In this microcast, I go through my Firefox extensions and the reasons I have them installed.


Show notes

The following are links to the Firefox Add-ons directory:


Image by emylo0 from Pixabay

To others we are not ourselves but a performer in their lives cast for a part we do not even know that we are playing

Surveillance, technology, and society

Last week, the London Metropolitan Police ('the Met') proudly announced that they've begun using 'LFR', which is their neutral-sounding acronym for something incredibly invasive to the privacy of everyday people in Britain's capital: Live Facial Recognition.

It's obvious that the Met expect some pushback here:

The Met will begin operationally deploying LFR at locations where intelligence suggests we are most likely to locate serious offenders. Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences. 

At a deployment, cameras will be focused on a small, targeted area to scan passers-by. The cameras will be clearly signposted and officers deployed to the operation will hand out leaflets about the activity. The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.

London Metropolitan Police

Note the talk of 'intelligence' and 'bespoke watch lists', as well as promises that LFR will not be linked to any other systems. (ANPR, for those not familiar with it, is 'Automatic Number Plate Recognition'.) This, of course, is the thin end of the wedge and how these things start — in a 'targeted' way. They're expanded later, often when the fuss has died down.


Meanwhile, a lot of controversy surrounds an app called Clearview AI which scrapes publicly-available data (e.g. Twitter or YouTube profiles) and applies facial recognition algorithms. It's already in use by law enforcement in the USA.

The size of the Clearview database dwarfs others in use by law enforcement. The FBI's own database, which taps passport and driver's license photos, is one of the largest, with over 641 million images of US citizens.

The Clearview app isn't available to the public, but the Times says police officers and Clearview investors think it will be in the future.

The startup said in a statement Tuesday that its "technology is intended only for use by law enforcement and security personnel. It is not intended for use by the general public." 

Edward Moyer (CNET)

So there we are again: the technology is 'intended' for one purpose, but the general feeling is that it will leak out into others. Imagine if anyone could identify almost anyone on the planet simply by pointing their smartphone at them for a few seconds.

This is a huge issue, and one that politicians and lawmakers on both sides of the Atlantic are ill-equipped to deal with, yet particularly concerned about. As the BBC reports, the European Commission is considering a five-year ban on facial recognition in public spaces while it figures out how to regulate the technology:

The Commission set out its plans in an 18-page document, suggesting that new rules will be introduced to bolster existing regulation surrounding privacy and data rights.

It proposed imposing obligations on both developers and users of artificial intelligence, and urged EU countries to create an authority to monitor the new rules.

During the ban, which would last between three and five years, "a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed".

BBC News

I can't see the genie going back in this particular bottle and, as Ian Welsh puts it, this is the end of public anonymity. He gives examples of the potential for all kinds of abuse, from an increase in rape, to abuse by corporations, to an increase in parental surveillance of children.

The larger issue is this: people who are constantly under surveillance become super conformers out of defense. Without true private time, the public persona and the private personality tend to collapse together. You need a backstage — by yourself and with a small group of friends to become yourself. You need anonymity.

When everything you do is open to criticism by everyone, you will become timid and conforming.

When governments, corporations, schools and parents know everything, they will try to control everything. This often won’t be for your benefit.

Ian Welsh

We already know that self-censorship is the worst kind of censorship, and live facial recognition means we're going to have to do a whole lot more of it in the near future.

So what can we do about it? Welsh thinks that this technology should be made illegal, which is one option. However, you can't un-invent technologies, so live facial recognition is going to be used (lawfully) by some organisations, even if only by state operatives. I'm not sure if that's better or worse than everyone having it.


At a recent workshop I ran, I was talking during one of the breaks to one person who couldn't really see the problem I had raised about surveillance capitalism. I have to wonder if they would have a problem with live facial recognition? From our conversation, I'd suspect not.

Remember that facial recognition is not 100% accurate and (realistically) never can be. So there will be false positives. Let's say your face ends up on a 'watch list' or a 'bad actor' database shared with many different agencies and retailers. All of a sudden, you've got yourself a very big problem.


As BuzzFeed News reports, around half of US retailers are either using live facial recognition or planning to use it. At the moment, companies like FaceFirst do not facilitate the sharing of data across their clients, but you can see what's coming next:

[Peter Trepp, CEO of FaceFirst] said the database is not shared with other retailers or with FaceFirst directly. All retailers have their own policies, but Trepp said often stores will offer not to press charges against apprehended shoplifters if they agree to opt into the store’s shoplifter database. The files containing the images and identities of people on “the bad guy list” are encrypted and only accessible to retailers using their own systems, he said.

FaceFirst automatically purges visitor data that does not match information in a criminal database every 14 days, which is the company’s minimum recommendation for auto-purging data. It’s up to the retailer if apprehended shoplifters or people previously on the list can later opt out of the database.

Leticia Miranda (BuzzFeed News)

There is no opt-in here, no consent sought or gathered from the people these retailers scan. This is a perfect example of technology being light years ahead of lawmaking.
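For what it's worth, the retention policy FaceFirst describes is straightforward to picture. The sketch below is purely illustrative: it assumes hypothetical record fields (`captured_at`, `matched`) that I've invented for the example, and simply drops unmatched visitor records older than the stated 14-day minimum window.

```python
from datetime import datetime, timedelta

# FaceFirst's stated minimum auto-purge window for unmatched visitors.
RETENTION = timedelta(days=14)


def purge_unmatched(records, now):
    """Keep only records that either matched a watch-list entry or are
    still within the retention window. Each record is a dict with
    'captured_at' (datetime) and 'matched' (bool) keys -- hypothetical
    field names, for illustration only."""
    return [
        r for r in records
        if r["matched"] or now - r["captured_at"] < RETENTION
    ]


now = datetime(2020, 1, 31)
records = [
    {"captured_at": datetime(2020, 1, 1),  "matched": False},  # stale: purged
    {"captured_at": datetime(2020, 1, 30), "matched": False},  # recent: kept
    {"captured_at": datetime(2020, 1, 1),  "matched": True},   # matched: kept forever
]
print(len(purge_unmatched(records, now)))  # 2
```

Notice the asymmetry the policy creates: ordinary visitors age out, but anyone who ever lands on "the bad guy list" is retained indefinitely unless the retailer chooses to let them opt out.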


This is all well and good in situations where adults are going into public spaces, but what about schools, where children are often only one step above prisoners in terms of the rights they enjoy?

Recode reports that, in schools, the surveillance threat to students goes beyond facial recognition. So long as authorities know generally what a student looks like, they can track them everywhere they go:

Appearance Search can find people based on their age, gender, clothing, and facial characteristics, and it scans through videos like facial recognition tech — though the company that makes it, Avigilon, says it doesn’t technically count as a full-fledged facial recognition tool

Even so, privacy experts told Recode that, for students, the distinction doesn’t necessarily matter. Appearance Search allows school administrators to review where a person has traveled throughout campus — anywhere there’s a camera — using data the system collects about that person’s clothing, shape, size, and potentially their facial characteristics, among other factors. It also allows security officials to search through camera feeds using certain physical descriptions, like a person’s age, gender, and hair color. So while the tool can’t say who the person is, it can find where else they’ve likely been.

Rebecca Heilweil (Recode)

This is a good example of the boundaries of technology that may or may not be banned at some point in the future. The makers of Appearance Search, Avigilon, claim that it's not facial recognition technology because the images it captures and analyses are not tied to the identity of a particular person:

Avigilon’s surveillance tool exists in a gray area: Even privacy experts are conflicted over whether or not it would be accurate to call the system facial recognition. After looking at publicly available content about Avigilon, Leong said it would be fairer to call the system an advanced form of characterization, meaning that the system is making judgments about the attributes of that person, like what they’re wearing or their hair, but it’s not actually claiming to know their identity.

Rebecca Heilweil (Recode)

You can give as many examples of the technology being used for good as you want — there's one in this article about how the system helped discover a girl was being bullied, for example — but it's still intrusive surveillance. There are other ways of getting to the same outcome.


We do not live in a world of certainty. We live in a world where things are ambiguous, unsure, and sometimes a little dangerous. While we should seek to protect one another, and especially those who are most vulnerable in society, we should think about the harm we're doing by forcing people to live the totality of their lives in public.

What does that do to our conceptions of self? To creativity? To activism? Live facial recognition technology, as well as those technologies that exist in a grey area around it, is the hot-button issue of the 2020s.


Image by Kirill Sharkovski. Quotation-as-title by Elizabeth Bibesco.

Friday festoonings

Check out these things I read and found interesting this week. Thanks to some positive feedback, I've carved out time for some commentary, and changed the way this link roundup is set out.

Let me know what you think! What did you find most interesting?


Maps Are Biased Against Animals

Critics may say that it is unreasonable to expect maps to reflect the communities or achievements of nonhumans. Maps are made by humans, for humans. When beavers start Googling directions to a neighbor’s dam, then their homes can be represented! For humans who use maps solely to navigate—something that nonhumans do without maps—man-made roads are indeed the only features that are relevant. Following a map that includes other information may inadvertently lead a human onto a trail made by and for deer.

But maps are not just tools to get from points A to B. They also relay new and learned information, document evolutionary changes, and inspire intrepid exploration. We operate on the assumption that our maps accurately reflect what a visitor would find if they traveled to a particular area. Maps have immense potential to illustrate the world around us, identifying all the important features of a given region. By that definition, the current maps that most humans use fall well short of being complete. Our definition of what is “important” is incredibly narrow.

Ryan Huling (WIRED)

Cartography is an incredibly powerful tool. We've known for a long time that “the map is not the territory”, but perhaps this is another weapon in the fight against climate change and the decline in the diversity of species?


Why Actually Principled People Are Difficult (Glenn Greenwald Edition)

Then you get people like Greenwald, Assange, Manning and Snowden. They are polarizing figures. They are loved or hated. They piss people off.

They piss people off precisely because they have principles they consider non-negotiable. They will not do the easy thing when it matters. They will not compromise on anything that really matters.

That’s breaking the actual social contract of “go along to get along”, “obey authority” and “don’t make people uncomfortable.” I recently talked to a senior activist who was uncomfortable even with the idea of yelling at powerful politicians. It struck them as close to violence.

So here’s the thing, people want men and women of principle to be like ordinary people.

They aren’t. They can’t be. If they were, they wouldn’t do what they do. Much of what you may not like about a Greenwald or Assange or Manning or Snowden is why they are what they are. Not just the principle, but the bravery verging on recklessness. The willingness to say exactly what they think, and do exactly what they believe is right even if others don’t.

Ian Welsh

Activists like Greta Thunberg and Edward Snowden are the closest we get to superheroes, to people who stand for the purest possible version of an idea. This is why we need them — and why we're so disappointed when they turn out to be human after all.


Explicit education

Students’ not comprehending the value of engaging in certain ways is more likely to be a failure in our teaching than their willingness to learn (especially if we create a culture in which success becomes exclusively about marks and credentialization). The question we have to ask is if what we provide as ‘university’ goes beyond the value of what our students can engage with outside of our formal offer. 

Dave White

This is a great post by Dave, who I had the pleasure of collaborating with briefly during my stint at Jisc. I definitely agree that any organisation walks a dangerous path when it becomes overly-fixated on the 'how' instead of the 'what' and the 'why'.


What Are Your Rules for Life? These 11 Expressions (from Ancient History) Might Help

The power of an epigram or one of these expressions is that they say a lot with a little. They help guide us through the complexity of life with their unswerving directness. Each person must, as the retired USMC general and former Secretary of Defense Jim Mattis, has said, “Know what you will stand for and, more important, what you won’t stand for.” “State your flat-ass rules and stick to them. They shouldn’t come as a surprise to anyone.”

Ryan Holiday

Of the 11 expressions here, I have to say that other than memento mori (“remember you will die”) I particularly like semper anticus (“always forward”) which I'm going to print out in a fancy font and stick on the wall of my home office.


Dark Horse Discord

In a hypothetical world, you could get a Discord (or whatever is next) link for your new job tomorrow – you read some wiki and meta info, sort yourself into your role you’d, and then are grouped with the people who you need to collaborate with on a need be basis. All wrapped in one platform. Maybe you have an HR complaint - drop it in #HR where you can’t read the messages but they can, so it’s a blind 1 way conversation. Maybe there is a #help channel, where you ping you write your problems and the bot pings people who have expertise based on keywords. There’s a lot of things you can do with this basic design.

Mule's Musings

What is described in this post is a bit of a stretch, but I can see it: a world where work is organised a bit like how gamers organise themselves in chat channels. Something to keep an eye on, as the interplay between what's 'normal' and what's possible with communications technology changes and evolves.


The Edu-Decade That Was: Unfounded Optimism?

What made the last decade so difficult is how education institutions let corporations control the definitions so that a lot of “study and ethical practice” gets left out of the work. With the promise of ease of use, low-cost, increased student retention (or insert unreasonable-metric-claim here), etc. institutions are willing to buy into technology without regard to accessibility, scalability, equity and inclusion, data privacy or student safety, in hope of solving problem X that will then get to be checked off of an accreditation list. Or worse, with the hope of not having to invest in actual people and local infrastructure.

Geoff Cain (Brainstorm in progress)

It's nice to see a list of some positives that came out of the last decade, and for microcredentials and badging to be on that list.


When Is a Bird a ‘Birb’? An Extremely Important Guide

First, let’s consider the canonized usages. The subreddit r/birbs defines a birb as any bird that’s “being funny, cute, or silly in some way.” Urban Dictionary has a more varied set of definitions, many of which allude to a generalized smallness. A video on the YouTube channel Lucidchart offers its own expansive suggestions: All birds are birbs, a chunky bird is a borb, and a fluffed-up bird is a floof. Yet some tension remains: How can all birds be birbs if smallness or cuteness are in the equation? Clearly some birds get more recognition for an innate birbness.

Asher Elbein (Audubon magazine)

A fun article, but also an interesting one when it comes to ambiguity, affinity groups, and internet culture.


Why So Many Things Cost Exactly Zero

“Now, why would Gmail or Facebook pay us? Because what we’re giving them in return is not money but data. We’re giving them lots of data about where we go, what we eat, what we buy. We let them read the contents of our email and determine that we’re about to go on vacation or we’ve just had a baby or we’re upset with our friend or it’s a difficult time at work. All of these things are in our email that can be read by the platform, and then the platform’s going to use that to sell us stuff.”

Fiona Scott Morton (Yale business school) quoted by Peter Coy (Bloomberg Businessweek)

Regular readers of Thought Shrapnel know all about surveillance capitalism, but it's good to see these explainers making their way to the more mainstream business press.


Your online activity is now effectively a social ‘credit score’

The most famous social credit system in operation is that used by China's government. It "monitors millions of individuals' behavior (including social media and online shopping), determines how moral or immoral it is, and raises or lowers their "citizen score" accordingly," reported The Atlantic in 2018.

"Those with a high score are rewarded, while those with a low score are punished." Now we know the same AI systems are used for predictive policing to round up Muslim Uighurs and other minorities into concentration camps under the guise of preventing extremism.

Violet Blue (Engadget)

Some (more prudish) people will write this article off because it discusses sex workers, porn, and gay rights. But the truth is that all kinds of censorship start with marginalised groups. To my mind, we're already on a trajectory away from Silicon Valley and towards Chinese technology. Will we be able to separate the tech from the morality?


Panicking About Your Kids’ Phones? New Research Says Don’t

The researchers worry that the focus on keeping children away from screens is making it hard to have more productive conversations about topics like how to make phones more useful for low-income people, who tend to use them more, or how to protect the privacy of teenagers who share their lives online.

“Many of the people who are terrifying kids about screens, they have hit a vein of attention from society and they are going to ride that. But that is super bad for society,” said Andrew Przybylski, the director of research at the Oxford Internet Institute, who has published several studies on the topic.

Nathaniel Popper (The New York Times)

Kids and screentime is just the latest (extended) moral panic. Overuse of anything causes problems, smartphones, games consoles, and TV included. What we need to do is to help our children find balance in all of this, which can be difficult for the first generation of parents navigating all of this on the frontline.


Gorgeous header art via the latest Facebook alternative, planetary.social

Microcast #084 - Chris Dixon on RSS, crypto, and community ownership of the internet

I don't often listen to the a16z podcast but for some reason I decided to listen to an episode about the past, present, and future of the internet while out for a long walk.

In it, Jonah Peretti, founder and CEO of Buzzfeed, interviews Chris Dixon, a partner at VC firm Andreessen Horowitz. A section of it really struck me, which I'd like to share with you now.

Microcast #084 - Chris Dixon on RSS, crypto, and community ownership of the internet

I'd be interested in your thoughts on it, too. Are you optimistic about the kind of approach that Dixon outlines?

Show notes

How you do anything is how you do everything

So said Derek Sivers, although I suspect that, originally, it's probably a core principle of Zen Buddhism. In this article I want to talk about management and leadership. But also about emotional intelligence and integrity.


I currently spend part of my working life as a Product Manager. At some organisations, this means that you're in charge of the budget, and pull in colleagues from different disciplines. For example, a designer you're working with on a particular project might report to the Head of UX. Matrix-style management and internal budgeting keeps track of everything.

This approach can get complicated so, at other companies (like the one I'm working with), the Product Manager manages both people and product. It's a lot of work, as both can be complicated.

I think I'm OK at managing people, and other people say I'm good at it, but it's not my favourite thing in the world to do.

That's why, when hiring, I try to do so in one of three ways. Ideally, I want to hire people with whom at least one member of the existing team has already worked and can vouch for. If that doesn't work, then I'm looking for people vouched for by the networks of which the team are part. Failing that, I'm trying to find people who don't wait for direction, but know how to get on with things that need doing.

It's an approach I've developed from the work of Laura Thomson. She's a former colleague at Mozilla, and an advocate of a chaordic style of management and self-organising ducks:

Instead of having ‘all your ducks in a row’ the analogy in chaordic management is to have ‘self-organising ducks’. The idea is to give people enough autonomy, knowledge and skill to be able to do the management themselves.

As I've said before, the default way of organising human beings is hierarchy. That doesn't mean it's the best way. Hierarchy tends to lean on processes, paperwork and meetings to 'get things done', but even a cursory glance at Open Source projects shows that all of this isn't strictly necessary.


Last week, a new-ish member of the team said that I can be "too nice". I'm still processing that and digging into what they meant, but I then ended up reading an article by Roddy Millar for Fast Company entitled Here’s why being likable may make you a less effective leader.

It's a slightly oddly-framed article that quotes Prof. Karen Cates from Northwestern’s Kellogg School of Management:

Leaders should not put likability above effectiveness. There are times when the humor and smiles need to go and a let’s-get-this-done approach is required. Cates goes further: “Even the ‘nasty boss approach’ can be really effective—but in short, small doses—to get everyone’s attention and say ‘Hey, we’ve got to make some changes around here.’ You can then create—with an earnest approach—that more likable persona as you move forward. Likability is a good thing to have in your leadership toolkit, but it shouldn’t be the biggest hammer in the box.”

Roddy Millar

I think there's a difference between 'trying to be likeable' and 'treating your colleagues with dignity and respect'.

If you're being nice just to be liked by your team, you're probably doing it wrong. It's a bit like, back when I was teaching, teachers who wanted to be liked by the kids they taught.

The other approach is to simply treat the people around you with dignity and respect, realising that all of human life involves suffering, so let's not add to the burden through our everyday working lives.

If you want to build a ship, don’t drum up the men to gather wood, divide the work, and give orders. Instead, teach them to yearn for the vast and endless sea.

Antoine de Saint-Exupéry

The above is one of my favourite quotations. We don't need to crack the whip or wield some kind of totem of hierarchical power over other people. We just need to ensure people are in the right place (physically and emotionally), with the right things (tools, skills, and information) to get things done.


In managers are for caring, Harold Jarche points a finger at hierarchical organisations, stating that they are "what we get when we use the blunt stick of economic consequences with financial quid pro quo as the prime motivator".

Jarche wonders instead what would happen if they were structured more like communities of practice?

What would an organization look like with looser hierarchies and stronger networks? A lot more human, retrieving some of the intimacy and cooperation of tribal groups. We already have other ways of organizing work. Orchestras are not teams, and neither are jazz ensembles. There may be teamwork on a theatre production but the cast is not a team. It is more like a community of practice, with strong and weak social ties.

Harold Jarche

I think part of the problem, to be honest, is emotional intelligence, or rather the lack of it, in many organisations.

Unfortunately, the way to earn more money in organisations is to start managing people. Which is fine for the subset of people who have the skills to be able to handle this. For others, it's a frustrating experience that takes them away from doing the work.


For TED Ideas, organisational psychologist Tomas Chamorro-Premuzic asks Why do so many incompetent men become leaders? And what can we do about it? He lists three reasons why we have so many incompetent (male) leaders:

  1. Our inability to distinguish between confidence and competence
  2. Our love of charismatic individuals
  3. The allure of “people with grandiose visions that tap into our own narcissism”

He suggests three ways to fix this. The other two are all well and good, but I just want to focus on the first:

The first solution is to follow the signs and look for the qualities that actually make people better leaders. There is a pathological mismatch between the attributes that seduce us in a leader and those that are needed to be an effective leader. If we want to improve the performance of our leaders, we should focus on the right traits. Instead of falling for people who are confident, narcissistic and charismatic, we should promote people because of competence, humility and integrity. Incidentally, this would also lead to a higher proportion of female than male leaders — large-scale scientific studies show that women score higher than men on measures of competence, humility and integrity. But the point is that we would significantly improve the quality of our leaders.

Tomas Chamorro-Premuzic

The best leaders I've worked for exhibited high levels of emotional intelligence. Most of them were women.

Developing emotional intelligence is difficult and goodness knows I'm no expert. What I think we perhaps need to do is to remove our corporate dependency on hierarchy. In hierarchies, emotion and trust are removed as impediments to action.

However, in my experience, hierarchy is inherently patriarchal and competitive. It's not something that's necessarily useful in every industry in the 21st century. And hierarchies are not places that I, and people like me, particularly thrive.

Instead, I think we require trust-based ways of organising — ways that emphasise human relationships. I think these are also more conducive to human flourishing.

Right now, approaches such as sociocracy take a while to get our collective heads around as they're opposed to our "default operating system" of hierarchy. However, over time I think we'll see versions of this becoming the norm, as it becomes ever easier to co-ordinate people at a distance.


To sum up, what it means to be an effective leader is changing. Returning to the article cited above by Harold Jarche, he writes:

Hierarchical teams are what we get when we use the blunt stick of economic consequences with financial quid pro quo as the prime motivator. In a creative economy, the unity of hierarchical teams is counter-productive, as it shuts off opportunities for serendipity and innovation. In a complex and networked economy workers need more autonomy and managers should have less control.

Harold Jarche

Many people no longer live in a world of the 'permanent job' and 'career ladder'. What counts as success for them is not necessarily a steadily-increasing paycheck, but measures such as social justice or 'making a dent in the universe'. This is where hierarchy fails, and where emergent, emotionally-intelligent leaders with teams of self-organising ducks thrive.

Friday foggings

I've been travelling this week, so I've had plenty of time to read and digest a whole range of articles. In fact, because of the luxury of that extra time, I decided to write some comments about each link, as well as the usual quotation.

Let me know what you think about this approach. I may not have the bandwidth to do it every week, but if it's useful, I'll try and prioritise it. As ever, particularly interested in hearing from supporters!


Education and Men without Work (National Affairs) — “Unlike the Great Depression, however, today's work crisis is not an unemployment crisis. Only a tiny fraction of workless American men nowadays are actually looking for employment. Instead we have witnessed a mass exodus of men from the workforce altogether. At this writing, nearly 7 million civilian non-institutionalized men between the ages of 25 and 54 are neither working nor looking for work — over four times as many as are formally unemployed.”

This article argues that the conventional wisdom, that men are out of work because of a lack of education, may be based on false assumptions. In fact, a major driver seems to be the number of men (more than 50% of working-age men, apparently) who live in child-free homes. What do these men end up doing with their time? Many of them are self-medicating with drugs and screens.


Fresh Cambridge Analytica leak ‘shows global manipulation is out of control’ (The Guardian) — “More than 100,000 documents relating to work in 68 countries that will lay bare the global infrastructure of an operation used to manipulate voters on “an industrial scale” are set to be released over the next months.”

Sadly, I think the response to these documents will be one of apathy. Due to the 24-hour news cycle and the stream of 'news' on social networks, the voting public grow tired of scandals and news stories that last for months and years.


Funding (Sussex Royals) — “The Sovereign Grant is the annual funding mechanism of the monarchy that covers the work of the Royal Family in support of HM The Queen including expenses to maintain official residences and workspaces. In this exchange, The Queen surrenders the revenue of the Crown Estate and in return, a portion of these public funds are granted to The Sovereign/The Queen for official expenditure.”

I don't think I need to restate my opinions on the Royal Family, privilege, and hierarchies / coercive power relationships of all shapes and sizes. However, as someone pointed out on Mastodon, this page by 'Harry and Meghan' is quietly subversive.


How to sell good ideas (New Statesman) — “It is true that [Malcolm] Gladwell sometimes presses his stories too militantly into the service of an overarching idea, and, at least in his books, can jam together materials too disparate to cohere (Poole referred to his “relentless montage”). The New Yorker essay, which constrains his itinerant curiosity, is where he does his finest work (the best of these are collected in 2009’s What The Dog Saw). For the most part, the work of his many imitators attests to how hard it is to do what he does. You have to be able to write lucid, propulsive prose capable of introducing complex ideas within a magnetic field of narrative. You have to leave your desk and talk to people (he never stopped being a reporter). Above all, you need to acquire an extraordinary eye for the overlooked story, the deceptively trivial incident, the minor genius. Gladwell shares the late Jonathan Miller’s belief that “it is in the negligible that the considerable is to be found”.”

A friend took me to see Gladwell when he was in Newcastle-upon-Tyne touring with 'What The Dog Saw'. Like the author of this article, I soon realised that Gladwell is selling something quite different to 'science' or 'facts'. And so long as you're OK with that, you can enjoy (as I do) his podcasts and books.


Just enough Internet: Why public service Internet should be a model of restraint (doteveryone) — “We have not yet done a good job of defining what good digital public service really looks like, of creating digital charters that match up to those of our great institutions, and it is these statements of values and ways of working – rather than any amount of shiny new technology – that will create essential building blocks for the public services of the future.”

While I attended the main MozFest weekend event, I missed the presentation and other events that happened earlier in the week. I definitely agree with the sentiment behind the transcript of this talk by Rachel Coldicutt. I'm just not sure it's specific enough to be useful in practice.


Places to go in 2020 (Marginal Revolution) — “Here is the mostly dull NYT list. Here is my personal list of recommendations for you, noting I have not been to all of the below, but I am in contact with many travelers and paw through a good deal of information."

This list by Tyler Cowen is really interesting. I haven't been to any of the places on this list, but I now really want to visit Eastern Bali and Baku in Azerbaijan.


Reasons not to scoff at ghosts, visions and near-death experiences (Aeon) — “Sure, the dangers of gullibility are evident enough in the tragedies caused by religious fanatics, medical quacks and ruthless politicians. And, granted, spiritual worldviews are not good for everybody. Faith in the ultimate benevolence of the cosmos will strike many as hopelessly irrational. Yet, a century on from James’s pragmatic philosophy and psychology of transformative experiences, it might be time to restore a balanced perspective, to acknowledge the damage that has been caused by stigma, misdiagnoses and mis- or overmedication of individuals reporting ‘weird’ experiences. One can be personally skeptical of the ultimate validity of mystical beliefs and leave properly theological questions strictly aside, yet still investigate the salutary and prophylactic potential of these phenomena.”

I'd happily read a full-length book on this subject, as it's a fascinating area. The tension between knowing that much/all of the phenomena is reducible to materiality and mechanics may explain what's going on, but it doesn't explain it away...


Surveillance Tech Is an Open Secret at CES 2020 (OneZero) — “Lowe offered one explanation for why these companies feel so comfortable marketing surveillance tech: He says that the genie can’t be put back in the bottle, so barring federal regulation that bans certain implementations, it’s increasingly likely that some company will fill the surveillance market. In other words, if Google isn’t going to work with the cops, Amazon will. And even if Amazon decides not to, smaller companies out of the spotlight still will.”

I suppose it should come as no surprise that, in this day and age, companies like Cyberlink, previously known for their PowerDVD software, have moved into the very profitable world of surveillance capitalism. What's going to stop its inexorable rise? I can only think of government regulation (with teeth).


‘Techlash’ Hits College Campuses (New York Times) — “Some recent graduates are taking their technical skills to smaller social impact groups instead of the biggest firms. Ms. Dogru said that some of her peers are pursuing jobs at start-ups focused on health, education and privacy. Ms. Harbour said Berkeley offers a networking event called Tech for Good, where alumni from purpose-driven groups like Code for America and Khan Academy share career opportunities.”

I'm not sure this is currently as big a 'movement' as suggested in the article, but I'm glad the wind is blowing in this direction. As with other ethically-dubious industries, companies involved in surveillance capitalism will have to pay people extraordinarily well to put aside their moral scruples.


Tradition is Smarter Than You Are (The Scholar's Stage) — “To extract resources from a population the state must be able to understand that population. The state needs to make the people and things it rules legible to agents of the government. Legibility means uniformity. States dream up uniform weights and measures, impress national languages and ID numbers on their people, and divvy the country up into land plots and administrative districts, all to make the realm legible to the powers that be. The problem is that not all important things can be made legible. Much of what makes a society successful is knowledge of the tacit sort: rarely articulated, messy, and from the outside looking in, purposeless. These are the first things lost in the quest for legibility. Traditions, small cultural differences, odd and distinctive lifeways... are all swept aside by a rationalizing state that preserves (or in many cases, imposes) only what it can be understood and manipulated from the 2,000 foot view. The result... are many of the greatest catastrophes of human history.”

One of the books that's been on my 'to-read' list for a while is 'Seeing Like a State', written by James C. Scott and referenced in this article. I'm no believer in tradition for the sake of it but, I have to say, a lot of the superstitions of my maternal grandmother, and a lot of the rituals that come with religion, are often very practical in nature.


Image by Michael Schlegel (via kottke.org)

Microcast #083 - Ambiguous in Kuwait City

Some reflections on my digital literacies pre-conference workshop yesterday for AMICAL.

Show notes

Given things as they are, how shall one individual live?

...asked Annie Dillard. It's a good question.

Richard D. Bartlett, who I support via Patreon and who is better known as richdecibels, has started a newsletter. The process of signing up for it reminded me of a post he wrote last year entitled Hierarchy Is Not The Problem...

Is it a circle or a cone?

Ten years ago, in my first foray into senior management, I was told by a consultant to the newly-installed Principal that "he's very hierarchical". She meant it in a good way, but I almost quit on the spot. To me, that's shorthand for a very dictatorial style of management.

So Bartlett's post, which I think I've mentioned before, is one I keep coming back to. He says that:

I don’t care about hierarchy. It’s just a shape. I care about power dynamics.

[...]

These days I have mostly removed “non-hierarchical” from my vocabulary. I still haven’t found a great replacement, but for now I say “decentralised”. But again, it’s not the shape that’s interesting, it’s the power dynamics.

Richard D. Bartlett

That's quite a challenging notion for me, having been in situations within very hierarchical organisations where people try and put me in a box, tie me to a particular role, or otherwise indicate I should stick to my own lane.

It's something I'm continuing to process. I'm not sure whether Bartlett's correct. It's a great argument, and I've certainly seen some great organisations structured by way of what I'd call the "default operating system" of hierarchy.

Perhaps the thing is that it's easy to show the way an organisation is structured (its nodes), but much harder to show the way those nodes connect with one another. Interactions between human beings are complicated, and difficult to put in a neat diagram.


Recently, Sam Altman, President of the famed startup accelerator Y Combinator, wrote a Twitter thread which he entitled How To Be Successful At Your Career. It's what people do instead of blogging these days, it would appear.

One tweet in the thread really stuck out to me, especially in this context of hierarchy and coercive power relationships:

The most successful people (judged by history, not money) continually look for the most important thing they are able to work on, and that’s what they do. They do not get trapped in local maxima, and they do not deceive themselves if they find something more important.

Sam Altman

In other words, what you're attempting to do should transcend the organisation you currently work for and the people with whom you currently work. I believe Steve Jobs called this "making a dent in the universe". It's unlikely to happen if you're playing politics within your organisation, if you're abusing a position of power, or you're spending all day in meetings.


Fred Wilson, a VC, says he often gets asked what to work on. This is understandable, given it's his job to keep his finger on the pulse of companies in which he can invest. Wilson sums up by saying:

You must work on something that inspires you and others, you must work on something with a significant impact, and you must do it in a way that makes getting where you want to go as easy as possible and keeps you there as long as possible.

Fred Wilson

I think this is a good mantra, and I appreciate that he doesn't just consider 'impact' to be 'financial impact', but also "how it changes the way people think and how they react to your product or service or innovation".


Context is really important. It's the reason why there is no one-size-fits-all approach to organisational structures, and why, unless you're the founder of the organisation, you will never be 100% aligned with everything it does. And even then, if your organisation grows to make an impact, there will be a difference between you and the organisation you helped to gestate.

All we can do, at any given point, is to weigh up where we are, using principles such as Fred Wilson's:

  1. Am I working on something that inspires me (and others)?
  2. Am I working on something with a significant impact?
  3. Am I working in a way that makes getting where I want to go as easy as possible (and keeps me there as long as possible)?

As Altman writes, that's likely to be in a place that doesn't play politics and, to Bartlett's point, it's important to pay very close attention to power dynamics. In short, it's important to ask ourselves regularly, "Am I best positioned to make the particular dent I've decided to make in the universe?"

Friday flurries

It's been a busy week, but I've still found time to unearth these gems...

  • The Dark Psychology of Social Networks (The Atlantic) — “The philosophers Justin Tosi and Brandon Warmke have proposed the useful phrase moral grandstanding to describe what happens when people use moral talk to enhance their prestige in a public forum. Like a succession of orators speaking to a skeptical audience, each person strives to outdo previous speakers, leading to some common patterns. Grandstanders tend to “trump up moral charges, pile on in cases of public shaming, announce that anyone who disagrees with them is obviously wrong, or exaggerate emotional displays.” Nuance and truth are casualties in this competition to gain the approval of the audience. Grandstanders scrutinize every word spoken by their opponents—and sometimes even their friends—for the potential to evoke public outrage. Context collapses. The speaker’s intent is ignored.”
  • Live Your Best Life—On and Off Your Phone—in 2020 (WIRED) — “It’s your devices versus your best life. Just in time for a new decade, though, several fresh books offer a more measured approach to living in the age of technology. These are not self-help books, or even books that confront our relationship with technology head-on. Instead, they examine the realities of a tech-saturated world and offer a few simple ideas for rewriting bad habits, reviewing the devices we actually need, and relearning how to listen amid all the noise.”
  • People Who Are Obsessed With Success and Prestige (Bennett Notes) — “What does it look like to be obsessed with success and prestige? It probably looks a lot like me at the moment. A guy who starts many endeavors and side projects just because he wants to be known as the creator of something. This a guy who wants to build another social app, not because he has an unique problem that’s unaddressed, but because he wants to be the cool tech entrepreneur who everyone admires and envies. This is a guy who probably doesn’t care for much of what he does, but continues to do so for the eventual social validation of society and his peers.”
  • The Lesson to Unlearn (Paul Graham) — “Merely talking explicitly about this phenomenon is likely to make things better, because much of its power comes from the fact that we take it for granted. After you've noticed it, it seems the elephant in the room, but it's a pretty well camouflaged elephant. The phenomenon is so old, and so pervasive. And it's simply the result of neglect. No one meant things to be this way. This is just what happens when you combine learning with grades, competition, and the naive assumption of unhackability.”
  • The End of the Beginning (Stratechery) — “[In consumer-focused startups] few companies are pure “tech” companies seeking to disrupt the dominant cloud and mobile players; rather, they take their presence as an assumption, and seek to transform society in ways that were previously impossible when computing was a destination, not a given. That is exactly what happened with the automobile: its existence stopped being interesting in its own right, while the implications of its existence changed everything.”
  • Populism Is Morphing in Insidious Ways (The Atlantic) — “If the 2010s were the years in which predominantly far-right, populist parties permeated the political mainstream, then the 2020s will be when voters “are going to see the consequences of that,” Daphne Halikiopoulou, an associate professor of comparative politics at the University of Reading, in England, told me.”
  • It’s the network, stupid: Study offers fresh insight into why we’re so divided (Ars Technica) — “There is no easy answer when it comes to implementing structural changes that encourage diversity, but today's extreme polarization need not become a permanent characteristic of our cultural landscape. "I think we need to adopt new skills as we are transitioning into a more complex, more globalized, and more interconnected world, where each of us can affect far-away parts of the world with our actions," said Galesic.”
  • Memorizing Lists of Cognitive Biases Won't Help (Hapgood) — “But if you want to change your own behavior, memorizing long lists of biases isn’t going to help you. If anything it’s likely to just become another weapon in your motivated reasoning arsenal. You can literally read the list of biases to see why reading the list won’t work.”
  • How to get more done by doing less (Fast Company) — “Sometimes, the secret to doing more isn’t optimizing every minute, but finding the things you can cull from your schedule. That way, you not only reduce the time you spend on non-essential tasks, but you can also find more time for yourself.”

Image via xkcd

Microcast #082 - Nodenoggin

This week, I've been delighted to be able to catch up with Adam Procter, academic, games designer, open advocate, and long-time supporter of Thought Shrapnel.

We discussed everything from the IndieWeb to his PhD project, with relevant links below!

Show notes

Most human beings have an almost infinite capacity for taking things for granted

So said Aldous Huxley. Recently, I discovered an episode of the podcast The Science of Success in which Dan Carlin was interviewed. Now Dan is the host of one of my favourite podcasts, Hardcore History, as well as one he's recently discontinued called Common Sense.

The reason the latter is on 'indefinite hiatus' was discussed on The Science of Success podcast. Dan feels that if he, after 30 years as a journalist, can't get a grip on the current information landscape, then who can? It's shaken him up a little.

One of the quotations he just gently lobbed into the conversation was from John Stuart Mill, who at one time or another was accused by someone of being 'inconsistent' in his views. Mill replied:

When the facts change, I change my mind. What do you do, sir?

John Stuart Mill

Now whether or not Mill said those exact words, the sentiment nevertheless stands. I reckon human beings have always made up their minds first and then chosen 'facts' to support their opinions. These days, I just think that it's easier than ever to find 'news' outlets and people sharing social media posts to support your worldview. It's as simple as that.


Last week I watched a stand-up comedy routine by Kevin Bridges on BBC iPlayer as part of his 2018 tour. As a Glaswegian, he made the (hilarious) analogy of social media as being like going into a pub.

(As an aside, this is interesting, as a decade ago people would often use the analogy of using social media as being like going to a café. The idea was that you could overhear, and perhaps join in with, interesting conversations. No-one uses that analogy any more.)

Bridges pointed out that if you entered a pub, sat down for a quiet pint, and the person next to you was trying to flog you Herbalife products, constantly talking about how #blessed they felt, or talking ambiguously for the sake of attention, you'd probably find another pub.

He was doing it for laughs, but I think he was also making a serious point. Online, we tolerate people ranting on and generally being obnoxious in ways we would never tolerate offline.

The underlying problem of course is that any platform that takes some segment of the real world and brings it into software will also bring in all that segment's problems. Amazon took products and so it has to deal with bad and fake products (whereas one might say that Facebook took people, and so has bad and fake people).

Benedict Evans

I met Clay Shirky at an event last month, which kind of blew my mind given that it was me speaking at it rather than him. After introducing myself, we spoke for a few minutes about everything from his choice of laptop to what he's been working on recently. Curiously, he's not writing a book at the moment. After a couple of very well-received books (Here Comes Everybody and Cognitive Surplus) Shirky has actually only published a slightly obscure book about Chinese smartphone manufacturing since 2010.

While I didn't have time to dig into things there and then, and it would have been a bit presumptuous of me to do so, it feels to me like Shirky may have 'walked back' some of his pre-2010 thoughts. This doesn't surprise me at all, given that many of the rest of us have, too. For example, in 2014 he published a Medium article explaining why he banned his students from using laptops in lectures. Such blog posts and news articles are common these days, but it felt like his was one of the first.


The last decade from 2010 to 2019, which Audrey Watters has done a great job of eviscerating, was, shall we say, somewhat problematic. The good news is that we connected 4.5 billion people to the internet. The bad news is that we didn't really harness that for much good. So we went from people sharing pictures of cats, to people sharing pictures of cats and destroying western democracy.

Other than the 'bad and fake people' problem cited by Ben Evans above, another big problem was the rise of surveillance capitalism. In a similar way to climate change, this has been repackaged as a series of individual failures on the part of end users. But, as Lindsey Barrett explains for Fast Company, it's not really our fault at all:

In some ways, the tendency to blame individuals simply reflects the mistakes of our existing privacy laws, which are built on a vision of privacy choices that generally considers the use of technology to be a purely rational decision, unconstrained by practical limitations such as the circumstances of the user or human fallibility. These laws are guided by the idea that providing people with information about data collection practices in a boilerplate policy statement is a sufficient safeguard. If people don’t like the practices described, they don’t have to use the service.

Lindsey Barrett

The problem is that we have monopolistic practices in the digital world. Fast Company also reports the four most downloaded apps of the 2010s were all owned by Facebook:

I don't actually think people really understand that their data from WhatsApp and Instagram is being hoovered up by Facebook. I don't think they then understand what Facebook does with that data. I tried to lift the veil on this a little bit at the event where I met Clay Shirky. I know at least one person who immediately deleted their Facebook account as a result of it. But I suspect everyone else will just keep on keeping on. And yes, I have been banging my drum about this for quite a while now. I'll continue to do so.

The truth is, and this is something I'll be focusing on in upcoming workshops I'm running on digital literacies, that to be an 'informed citizen' these days means reading things like the EFF's report into the current state of corporate surveillance. It means deleting accounts as a result. It means slowing down, taking time, and reading stuff before sharing it on platforms that you know care for the many, not the few. It means actually caring about this stuff.

All of this might just look and feel like a series of preferences. I prefer decentralised social networks and you prefer Facebook. Or I like to use Signal and you like WhatsApp. But it's more than that. It's a whole lot more than that. Democracy as we know it is at stake here.


As Prof. Scott Galloway has discussed from an American point of view, we're living in times of increasing inequality. The tools we're using exacerbate that inequality. All of a sudden you have to be amazing at your job to even be able to have a decent quality of life:

The biggest losers of the decade are the unremarkables. Our society used to give remarkable opportunities to unremarkable kids and young adults. Some of the crowding out of unremarkable white males, including myself, is a good thing. More women are going to college, and remarkable kids from low-income neighborhoods get opportunities. But a middle-class kid who doesn’t learn to code Python or speak Mandarin can soon find she is not “tracking” and can’t catch up.

Prof. Scott Galloway

I shared an article last Friday about how you shouldn't have to be good at your job. The whole point of society is that we look after one another, not compete with one another to see which of us can 'extract the most value' and pile up more money than he or she can ever hope to spend. Yes, it would be nice if everyone was awesome at all they did, but the optimisation of everything isn't the point of human existence.

So once we come down the stack from social networks, to surveillance capitalism, to economics and markets eating the world, we find the real problem behind all of this: decision-making. We've sacrificed stability for speed, and seem to be increasingly happy with dictator-like behaviour in both our public institutions and corporate lives.

Dictatorships can be more efficient than democracies because they don’t have to get many people on board to make a decision. Democracies, by contrast, are more robust, but at the cost of efficiency.

Taylor Pearson

A selectorate, according to Pearson, "represents the number of people who have influence in a government, and thus the degree to which power is distributed". Aside from the fact that dictatorships tend to be corrupt and oppressive, they're just not a good idea in terms of decision-making:

Said another way, much of what appears efficient in the short term may not be efficient but hiding risk somewhere, creating the potential for a blow-up. A large selectorate tends to appear to be working less efficiently in the short term, but can be more robust in the long term, making it more efficient in the long term as well. It is a story of the Tortoise and the Hare: slow and steady may lose the first leg, but win the race.

Taylor Pearson

I don't think we should be optimising human beings for their role in markets. I think we should be optimising markets (if in fact we need them) for their role in human flourishing. The best way of doing that is to ensure that we distribute power and decision-making well.


So it might seem that my continual ragging on Facebook (in particular) is a small thing in the bigger picture. But it's actually part of the whole deal. When we have super-powerful individuals whose companies have the ability to surveil us at will; who then share that data with corrupt regimes; who in turn reinforce the worst parts of the status quo; then I think we have a problem.

This year I've made a vow to be more radical. To speak my mind even more, and truth to power, especially when it's inconvenient. I hope you'll join me ✊

Friday fertilisations

I've read so much stuff over the past couple of months that it's been a real job whittling down these links. In the end I gave up and shared a few more than usual!

  • You Shouldn’t Have to Be Good at Your Job (GEN) — "This is how the 1% justifies itself. They are not simply the best in terms of income, but in terms of humanity itself. They’re the people who get invited into the escape pods when the mega-asteroid is about to hit. They don’t want a fucking thing to do with the rest of the population and, in fact, they have exploited global economic models to suss out who deserves to be among them and who deserves to be obsolete. And, thanks to lax governments far and wide, they’re free to practice their own mass experiments in forced Darwinism. You currently have the privilege of witnessing a worm’s-eye view of this great culling. Fun, isn’t it?"
  • We've spent the decade letting our tech define us. It's out of control (The Guardian) — "There is a way out, but it will mean abandoning our fear and contempt for those we have become convinced are our enemies. No one is in charge of this, and no amount of social science or monetary policy can correct for what is ultimately a spiritual deficit. We have surrendered to digital platforms that look at human individuality and variance as “noise” to be corrected, rather than signal to be cherished. Our leading technologists increasingly see human beings as a problem, and technology as the solution – and they use our behavior on their platforms as evidence of our essentially flawed nature."
  • How headphones are changing the sound of music (Quartz) — "Another way headphones are changing music is in the production of bass-heavy music. Harding explains that on small speakers, like headphones or those in a laptop, low frequencies are harder to hear than when blasted from the big speakers you might encounter at a concert venue or club. If you ever wondered why the bass feels so powerful when you are out dancing, that’s why. In order for the bass to be heard well on headphones, music producers have to boost bass frequencies in the higher range, the part of the sound spectrum that small speakers handle well."
  • The False Promise of Morning Routines (The Atlantic) — "Goat milk or no goat milk, the move toward ritualized morning self-care can seem like merely a palliative attempt to improve work-life balance. It makes sense to wake up 30 minutes earlier than usual because you want to fit in some yoga, an activity that you enjoy. But something sinister seems to be going on if you feel that you have to wake up 30 minutes earlier than usual to improve your well-being, so that you can also work 60 hours a week, cook dinner, run errands, and spend time with your family."
  • Giant surveillance balloons are lurking at the edge of space (Ars Technica) — "The idea of a constellation of stratospheric balloons isn’t new—the US military floated the idea back in the ’90s—but technology has finally matured to the point that they’re actually possible. World View’s December launch marks the first time the company has had more than one balloon in the air at a time, if only for a few days. By the time you’re reading this, its other stratollite will have returned to the surface under a steerable parachute after nearly seven weeks in the stratosphere."
  • The Unexpected Philosophy Icelanders Live By (BBC Travel) — "Maybe it makes sense, then, that in a place where people were – and still are – so often at the mercy of the weather, the land and the island’s unique geological forces, they’ve learned to give up control, leave things to fate and hope for the best. For these stoic and even-tempered Icelanders, þetta reddast is less a starry-eyed refusal to deal with problems and more an admission that sometimes you must make the best of the hand you’ve been dealt."
  • What Happens When Your Career Becomes Your Whole Identity (HBR) — "While identifying closely with your career isn’t necessarily bad, it makes you vulnerable to a painful identity crisis if you burn out, get laid off, or retire. Individuals in these situations frequently suffer anxiety, depression, and despair. By claiming back some time for yourself and diversifying your activities and relationships, you can build a more balanced and robust identity in line with your values."
  • Having fun is a virtue, not a guilty pleasure (Quartz) — "There are also, though, many high-status workers who can easily afford to take a break, but opt instead to toil relentlessly. Such widespread workaholism in part reflects the misguided notion that having fun is somehow an indulgence, an act of absconding from proper respectable behavior, rather than embracement of life. "
  • It’s Time to Get Personal (Laura Kalbag) — "As designers and developers, it’s easy to accept the status quo. The big tech platforms already exist and are easy to use. There are so many decisions to be made as part of our work, we tend to just go with what’s popular and convenient. But those little decisions can have a big impact, especially on the people using what we build."
  • The 100 Worst Ed-Tech Debacles of the Decade (Hack Education) — "Oh yes, I’m sure you can come up with some rousing successes and some triumphant moments that made you thrilled about the 2010s and that give you hope for “the future of education.” Good for you. But that’s not my job. (And honestly, it’s probably not your job either.)"
  • Why so many Japanese children refuse to go to school (BBC News) — "Many schools in Japan control every aspect of their pupils' appearance, forcing pupils to dye their brown hair black, or not allowing pupils to wear tights or coats, even in cold weather. In some cases they even decide on the colour of pupils' underwear. "
  • The real scam of ‘influencer’ (Seth Godin) — "And a bigger part is that the things you need to do to be popular (the only metric the platforms share) aren’t the things you’d be doing if you were trying to be effective, or grounded, or proud of the work you’re doing."

Image via Kottke.org

Microcast #081 - Anarchy, Federation, and the IndieWeb

Happy New Year! It's good to be back.

This week's microcast answers a question from John Johnston about federation and the IndieWeb. I also discuss anarchism and left-libertarianism, for good measure.

Show notes

Quick update!

For approximately the last decade, I've had an annual hiatus from writing and social media, and focused on inputs rather than outputs. Sometimes that's lasted a month, sometimes two.

This year, I'm going to be sending out weekly newsletters (only) during November, and then nothing at all in December. As a result, there won't be any more posts on this site until January 2020.

I'd like to take this opportunity to thank everyone who has commented on my work this year, either publicly or privately. A special thanks goes to those who back Thought Shrapnel via Patreon. I really do appreciate your support!

Friday fablings

I couldn't ignore these things this week:

  1. The 2010s Broke Our Sense Of Time (BuzzFeed News) — "Everything good, bad, and complicated flows through our phones, and for those not living some hippie Walden trip, we operate inside a technological experience that moves forward and back, and pulls you with it.... You can find yourself wondering why you’re seeing this now — or knowing too well why it is so. You can feel amazing and awful — exult in and be repelled by life — in the space of seconds. The thing you must say, the thing you’ve been waiting for — it’s always there, pulling you back under again and again and again. Who can remember anything anymore?"
  2. Telling Gareth Bale that Johnson is PM took away banterpocalypse’s sole survivor (The Guardian) — "The point is: it is more than theoretically conceivable that Johnson could be the shortest-serving prime minister in 100 years, and thus conceivable that Gareth Bale could have remained ignorant of his tenure in its entirety. Before there were smartphones and so on, big news events that happened while you were on holiday felt like they hadn’t truly happened. Clearly they HAD happened, in some philosophical sense or other, but because you hadn’t experienced them unfolding live on the nightly news, they never felt properly real."
  3. Dreaming is Free (Learning Nuggets) — "When I was asked to keynote the Fleming College Fall Teaching & Learning Day, I thought it’d be a great chance to heed some advice from Blondie (Dreaming is free, after all) and drop a bunch of ideas for digital learning initiatives that we could do and see which ones that we can breath some life into. Each of these ideas are inspired by some open, networked and/or connectivist learning experiences that are already out there."
  4. Omniviolence Is Coming and the World Isn’t Ready (Nautilus) — "The trouble is that if anyone anywhere can attack anyone anywhere else, then states will become—and are becoming—unable to satisfy their primary duty as referee. It’s a trend toward anarchy, “the war of all against all,” as Hobbes put it—in other words a condition of everyone living in constant fear of being harmed by their neighbors."
  5. We never paid for Journalism (iDiallo) — "At the end of the day, the price that you and I pay, whether it is for the print copy or digital, it is only a very small part of the revenue. The price paid for the printed copy was by no means sustaining the newspaper business. It was advertisers all along. And they paid the price for the privilege of having as many eyeballs the newspaper could expose their ads to."
  6. Crossing Divides: How a social network could save democracy from deadlock (BBC News) — "This was completely different from simply asking them to vote via an app. vTaiwan gave participants the agenda-setting power not just to determine the answer, but also define the question. And it didn't aim to find a majority of one side over another, but achieve consensus across them."
  7. Github removes Tsunami Democràtic’s APK after a takedown order from Spain (TechCrunch) — "While the Tsunami Democràtic app could be accused of encouraging disruption, the charge of “terrorism” is clearly overblown. Unless your definition of terrorism extends to harnessing the power of peaceful civil resistance to generate momentum for political change."
  8. You Choose (inessential) — "You choose the web you want. But you have to do the work. A lot of people are doing the work. You could keep telling them, discouragingly, that what they’re doing is dead. Or you could join in the fun."
  9. Agency Is Key (gapingvoid) — "People don’t innovate (“Thrive” mode) when they’re scared. Instead, they keep their heads down (“Survive” mode)."

Image by False Knees

Microcast #080 - Redecentralize and MozFest

Friday facilitations

This week, je présente...

  1. We Have No Reason to Believe 5G Is Safe (Scientific American) — "The latest cellular technology, 5G, will employ millimeter waves for the first time in addition to microwaves that have been in use for older cellular technologies, 2G through 4G. Given limited reach, 5G will require cell antennas every 100 to 200 meters, exposing many people to millimeter wave radiation... [which are] absorbed within a few millimeters of human skin and in the surface layers of the cornea. Short-term exposure can have adverse physiological effects in the peripheral nervous system, the immune system and the cardiovascular system."
  2. Situated degree pathways (The Ed Techie) — "[T]he Trukese navigator “begins with an objective rather than a plan. He sets off toward the objective and responds to conditions as they arise in an ad hoc fashion. He utilizes information provided by the wind, the waves, the tide and current, the fauna, the stars, the clouds, the sound of the water on the side of the boat, and he steers accordingly.” This is in contrast to the European navigator who plots a course “and he carries out his voyage by relating his every move to that plan. His effort throughout his voyage is directed to remaining ‘on course’."
  3. on rms / necessary but not sufficient (p1k3) — "To the extent that free software was about wanting the freedom to hack and freely exchange the fruits of your hacking, this hasn’t gone so badly. It could be better, but I remember the 1990s pretty well and I can tell you that much of the stuff trivially at my disposal now would have blown my tiny mind back then. Sometimes I kind of snap to awareness in the middle of installing some package or including some library in a software project and this rush of gratitude comes over me."
  4. Screen time is good for you—maybe (MIT Technology Review) — "Przybylski admitted there are some drawbacks to his team’s study: demographic effects, like socioeconomics, are tied to psychological well-being, and he said his team is working to differentiate those effects—along with the self-selection bias introduced when kids and their caregivers report their own screen use. He also said he was working to figure out whether a certain type of screen use was more beneficial than others."
  5. This Map Lets You Plug in Your Address to See How It’s Changed Over the Past 750 Million Years (Smithsonian Magazine) — "Users can input a specific address or more generalized region, such as a state or country, and then choose a date ranging from zero to 750 million years ago. Currently, the map offers 26 timeline options, traveling back from the present to the Cryogenian Period at intervals of 15 to 150 million years."
  6. Understanding extinction — humanity has destroyed half the life on Earth (CBC) — "One of the most significant ways we've reduced the biomass on the planet is by altering the kind of life our planet supports. One huge decrease and shift was due to the deforestation that's occurred with our increasing reliance on agriculture. Forests represent more living material than fields of wheat or soybeans."
  7. Honks vs. Quacks: A Long Chat With the Developers of 'Untitled Goose Game' (Vice) — "[L]ike all creative work, this game was made through a series of political decisions. Even if this doesn’t explicitly manifest in the text of the game, there are a bunch of ambient traces of our politics evident throughout it: this is why there are no cops in the game, and why there’s no crown on the postbox."
  8. What is the Zeroth World, and how can we use it? (Bryan Alexander) — "[T]he idea of a zeroth world is also a critique. The first world idea is inherently self-congratulatory. In response, zeroth sets the first in some shade, causing us to see its flaws and limitations. Like postmodern to modern, or Internet2 to the rest of the internet, it’s a way of helping us move past the status quo."
  9. It’s not the claim, it’s the frame (Hapgood) — "[A] news-reading strategy where one has to check every fact of a source because the source itself cannot be trusted is neither efficient nor effective. Disinformation is not usually distributed as an entire page of lies.... Even where people fabricate issues, they usually place the lies in a bed of truth."

Image of hugelkultur bed via Sid

We don’t receive wisdom; we must discover it for ourselves after a journey that no one can take us on or spare us

So said Marcel Proust, that famous connoisseur of les petites madeleines. While I don't share his effete view of the world, I do like French cakes and definitely agree with his sentiments on wisdom.

Earlier this week, Eylan Ezekiel shared this Nesta Landscape of innovation approaches with our Slack channel. It's what I would call 'slidebait' — carefully crafted to fit onto slide decks in keynotes around the world. It's a smart move because it gets people talking about your organisation.

Nesta's Landscape of innovation approaches
Nesta's Landscape of innovation approaches

In my opinion, how these things are made is more interesting than the end result. There are inevitably value judgements when creating anything like this, and, because Nesta have set it out as overlapping 'spaces', the most obvious takeaway from the above diagram is that those innovation approaches sitting within three overlapping spaces are the 'most valuable' or 'most impactful'. Is that true?

A previous post on this topic from the Nesta blog explains:

Although this map is neither exhaustive nor definitive – and at some points it may seem perhaps a little arbitrary, personal choice and preference – we have tried to provide an overview of both commonly used and emerging innovation approaches.

Bas Leurs (formerly of Nesta)

When you're working for a well-respected organisation, you have to be really careful, because people can take what you produce as some sort of Gospel Truth. No matter how many caveats you add, people confuse the map with the territory.

I have some experience with creating a 'map' for a given area, as I was Mozilla's Web Literacy Lead from 2013 to 2015. During that time, I worked with the community to take the Web Literacy Standard Map from v0.1 to v1.5.

Digital literacies of various types are something I've been paying attention to for around 15 years now. And, let me tell you, I've seen some pretty bad 'maps' and 'frameworks'.

For example, here's a slide deck for a presentation I did for a European Commission Summer School last year, in which I attempted to take the audience on a journey to decide whether a particular example I showed them was any good:

If you have a look at Slide 14 onwards, you'll see that the point I was trying to make is that you have no way of knowing whether or not a shiny, good-looking map is any good. The organisation that produced it didn't 'show their work', so you have zero insight into the decisions taken in its creation. Did their intern knock it up on a short deadline? We'll never know.

The problem with many think tanks and 'innovation' organisations is that they move on too quickly to the next thing. Instead of sitting with something and letting it mature and flourish, as soon as the next bit of funding comes in, they're off like a dog chasing a shiny car. I'm not sure that's how innovation works.

Before Mozilla, I worked at Jisc, which at the time funded innovation programmes on behalf of the UK government and disseminated the outcomes. I remember a very simple overview from Jisc's Sustaining and Embedding Innovations project that focused on three stages of innovation:

Invention
This is about the generation of new ideas e.g. new ways of teaching and learning or new ICT solutions.

Early Innovation
This is all about the early practical application of new inventions, often focused in specific areas e.g. a subject discipline or speciality such as distance learning or work-based learning.

Systemic Innovation
This is where an institution, for example, will aim to embed an innovation institutionally.

Jisc

The problem with many maps and frameworks, especially around digital skills and innovation, is that they remove any room for ambiguity. So, in an attempt not to come across as vague, they instead become 'dead metaphors'.

Continuum of ambiguity
Continuum of Ambiguity

I don't think I've ever seen an example where, without any contextualisation, an individual or organisation has taken something 'off the shelf' and applied it to achieve uniformly fantastic results. That's not how these things work.

Humans are complex organisms; we're not machines. For a given input you can't expect the same output. We're not lossless replicators.

So although it takes time, effort, and resources, you've got to put in the hard yards to see an innovation through all three of those stages outlined by Jisc. Although the temptation is to nail things down initially, the opposite is actually the best way forward. Take people on a journey and get them to invest in what's at stake. Embrace the ambiguity.

I've written more about this in a post I wrote about a 5-step process for creating a sustainable digital literacies curriculum. It's something I'll be thinking about more as I reboot my consultancy work (through our co-op) for 2020!

For now, though, remember this wonderful African proverb:

"If you want to go fast, go alone. If you want to go far, go together." (African proverb)
CC BY-ND Bryan Mathers

Microcast #079 - information environments

This week's microcast is about information environments, the difference between technical and 'people' skills, and sharing your experience.

Show notes

Friday flowerings

Did you see these things this week?

  • Happy 25th year, blogging. You’ve grown up, but social media is still having a brawl (The Guardian) — "The furore over social media and its impact on democracy has obscured the fact that the blogosphere not only continues to exist, but also to fulfil many of the functions of a functioning public sphere. And it’s massive. One source, for example, estimates that more than 409 million people view more than 20bn blog pages each month and that users post 70m new posts and 77m new comments each month. Another source claims that of the 1.7 bn websites in the world, about 500m are blogs. And Wordpress.com alone hosts blogs in 120 languages, 71% of them in English."
  • Emmanuel Macron Wants to Scan Your Face (The Washington Post) — "President Emmanuel Macron’s administration is set to be the first in Europe to use facial recognition when providing citizens with a secure digital identity for accessing more than 500 public services online... The roll-out is tainted by opposition from France’s data regulator, which argues the electronic ID breaches European Union rules on consent – one of the building blocks of the bloc’s General Data Protection Regulation laws – by forcing everyone signing up to the service to use the facial recognition, whether they like it or not."
  • This is your phone on feminism (The Conversationalist) — "Our devices are basically gaslighting us. They tell us they work for and care about us, and if we just treat them right then we can learn to trust them. But all the evidence shows the opposite is true. This cognitive dissonance confuses and paralyses us. And look around. Everyone has a smartphone. So it’s probably not so bad, and anyway, that’s just how things work. Right?"
  • Google’s auto-delete tools are practically worthless for privacy (Fast Company) — "In reality, these auto-delete tools accomplish little for users, even as they generate positive PR for Google. Experts say that by the time three months rolls around, Google has already extracted nearly all the potential value from users’ data, and from an advertising standpoint, data becomes practically worthless when it’s more than a few months old."
  • Audrey Watters (Uses This) — "For me, the ideal set-up is much less about the hardware or software I am using. It's about the ideas that I'm thinking through and whether or not I can sort them out and shape them up in ways that make for a good piece of writing. Ideally, that does require some comfort -- a space for sustained concentration. (I know better than to require an ideal set up in order to write. I'd never get anything done.)"
  • Computer Files Are Going Extinct (OneZero) — "Files are skeuomorphic. That’s a fancy word that just means they’re a digital concept that mirrors a physical item. A Word document, for example, is like a piece of paper, sitting on your desk(top). A JPEG is like a painting, and so on. They each have a little icon that looks like the physical thing they represent. A pile of paper, a picture frame, a manila folder. It’s kind of charming really."
  • Why Technologists Fail to Think of Moderation as a Virtue and Other Stories About AI (The LA Review of Books) — "Speculative fiction about AI can move us to think outside the well-trodden clichés — especially when it considers how technologies concretely impact human lives — through the influence of supersized mediators, like governments and corporations."
  • Inside Mozilla’s 18-month effort to market without Facebook (Digiday) — "The decision to focus on data privacy in marketing the Mozilla brand came from research conducted by the company four years ago into the rise of consumers who make values-based decisions on not only what they purchase but where they spend their time."
  • Core human values not eyeballs (Cubic Garden) — "Theres so much more to do, but the aims are high and important for not just the BBC, but all public service entities around the world. Measuring the impact and quality on peoples lives beyond the shallow meaningless metrics for public service is critical."

Image: The why is often invisible via Jessica Hagy's Indexed

Microcast #078 — Values-based organisations

I've decided to post these microcasts, which I previously made available only through Patreon, here instead.

Microcasts focus on what I've been up to and thinking about, and also provide a way to answer questions from supporters and other readers/listeners!

This microcast covers ethics in decision-making for technology companies and (related!) some recent purchases I've made.

Show notes

I am not fond of expecting catastrophes, but there are cracks in the universe

So said Sydney Smith. Let's talk about surveillance. Let's talk about surveillance capitalism and surveillance humanitarianism. But first, let's talk about machine learning and algorithms; in other words, let's talk about what happens after all of that data is collected.

Writing in The Guardian, Sarah Marsh investigates local councils using "automated guidance systems" in an attempt to save money.

The systems are being deployed to provide automated guidance on benefit claims, prevent child abuse and allocate school places. But concerns have been raised about privacy and data security, the ability of council officials to understand how some of the systems work, and the difficulty for citizens in challenging automated decisions.

Sarah Marsh

The trouble is, they're not particularly effective:

It has emerged North Tyneside council has dropped TransUnion, whose system it used to check housing and council tax benefit claims. Welfare payments to an unknown number of people were wrongly delayed when the computer’s “predictive analytics” erroneously identified low-risk claims as high risk

Meanwhile, Hackney council in east London has dropped Xantura, another company, from a project to predict child abuse and intervene before it happens, saying it did not deliver the expected benefits. And Sunderland city council has not renewed a £4.5m data analytics contract for an “intelligence hub” provided by Palantir.

Sarah Marsh

When I was at Mozilla, a number of my colleagues had worked on the OFA (Obama For America) campaign. I remember one of them, a DevOps guy, expressing his concern that the infrastructure being built was all well and good while there was someone 'friendly' in the White House, but what came next?

Well, we now know what comes next, on both sides of the Atlantic, and we can't put that genie back in its bottle. Swingeing cuts by successive Conservative governments over here, coupled with the Brexit time-and-money pit means that there's no attention or cash left.

If we stop and think about things for a second, we probably don't want to live in a world where machines make decisions for us, based on algorithms devised by nerds. As Rose Eveleth discusses in a scathing article for Vox, this stuff isn't 'inevitable' — nor does it constitute a process of 'natural selection':

Often consumers don’t have much power of selection at all. Those who run small businesses find it nearly impossible to walk away from Facebook, Instagram, Yelp, Etsy, even Amazon. Employers often mandate that their workers use certain apps or systems like Zoom, Slack, and Google Docs. “It is only the hyper-privileged who are now saying, ‘I’m not going to give my kids this,’ or, ‘I’m not on social media,’” says Rumman Chowdhury, a data scientist at Accenture. “You actually have to be so comfortable in your privilege that you can opt out of things.”

And so we’re left with a tech world claiming to be driven by our desires when those decisions aren’t ones that most consumers feel good about. There’s a growing chasm between how everyday users feel about the technology around them and how companies decide what to make. And yet, these companies say they have our best interests in mind. We can’t go back, they say. We can’t stop the “natural evolution of technology.” But the “natural evolution of technology” was never a thing to begin with, and it’s time to question what “progress” actually means.

Rose Eveleth

I suppose the thing that concerns me the most is people in dire need being subject to impersonal technology for vital and life-saving aid.

For example, Mark Latonero, writing in The New York Times, talks about the growing dangers around what he calls 'surveillance humanitarianism':

By surveillance humanitarianism, I mean the enormous data collection systems deployed by aid organizations that inadvertently increase the vulnerability of people in urgent need.

Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don’t apply for people who are starving.

Mark Latonero

It's easy to think that this is an emergency, so we should just do whatever is necessary. But Latonero explains that doing so merely shifts the risk to a later time:

If an individual or group’s data is compromised or leaked to a warring faction, it could result in violent retribution for those perceived to be on the wrong side of the conflict. When I spoke with officials providing medical aid to Syrian refugees in Greece, they were so concerned that the Syrian military might hack into their database that they simply treated patients without collecting any personal data. The fact that the Houthis are vying for access to civilian data only elevates the risk of collecting and storing biometrics in the first place.

Mark Latonero

There was a rather startling article in last weekend's newspaper, which I've found online. Hannah Devlin, again writing in The Guardian (which is a good source of information for those concerned with surveillance) writes about a perfect storm of social media and improved processing speeds:

[I]n the past three years, the performance of facial recognition has stepped up dramatically. Independent tests by the US National Institute of Standards and Technology (Nist) found the failure rate for finding a target picture in a database of 12m faces had dropped from 5% in 2010 to 0.1% this year.

The rapid acceleration is thanks, in part, to the goldmine of face images that have been uploaded to Instagram, Facebook, LinkedIn and captioned news articles in the past decade. At one time, scientists would create bespoke databases by laboriously photographing hundreds of volunteers at different angles, in different lighting conditions. By 2016, Microsoft had published a dataset, MS Celeb, with 10m face images of 100,000 people harvested from search engines – they included celebrities, broadcasters, business people and anyone with multiple tagged pictures that had been uploaded under a Creative Commons licence, allowing them to be used for research. The dataset was quietly deleted in June, after it emerged that it may have aided the development of software used by the Chinese state to control its Uighur population.

In parallel, hardware companies have developed a new generation of powerful processing chips, called Graphics Processing Units (GPUs), uniquely adapted to crunch through a colossal number of calculations every second. The combination of big data and GPUs paved the way for an entirely new approach to facial recognition, called deep learning, which is powering a wider AI revolution.

Hannah Devlin
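To put those NIST figures in perspective, here's a back-of-the-envelope sketch (my own illustration; the search volume is hypothetical, only the failure rates come from the article) of what the drop from a 5% to a 0.1% miss rate means in absolute terms:

```python
# Back-of-the-envelope: what the NIST failure-rate drop means in practice.
# Figures from the article: when searching a database of 12 million faces,
# the miss rate fell from 5% (2010) to 0.1% (2019).

def expected_misses(searches: int, failure_rate: float) -> float:
    """Expected number of searches that fail to find the target picture."""
    return searches * failure_rate

searches = 100_000  # hypothetical volume of lookups

misses_2010 = expected_misses(searches, 0.05)   # 5% failure rate
misses_2019 = expected_misses(searches, 0.001)  # 0.1% failure rate

print(misses_2010)                  # 5000.0
print(misses_2019)                  # 100.0
print(misses_2010 / misses_2019)    # 50.0 — a fiftyfold improvement
```

Across a hundred thousand searches, roughly 5,000 missed targets become 100. That fiftyfold improvement is why the combination of scraped face datasets and GPU-trained deep learning changed what is operationally feasible for mass surveillance.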

Those of you who have read this far and are expecting some big reveal are going to be disappointed. I don't have any 'answers' to these problems. I guess I've been guilty, like many of us have, of the kind of 'privacy nihilism' mentioned by Ian Bogost in The Atlantic:

Online services are only accelerating the reach and impact of data-intelligence practices that stretch back decades. They have collected your personal data, with and without your permission, from employers, public records, purchases, banking activity, educational history, and hundreds more sources. They have connected it, recombined it, bought it, and sold it. Processed foods look wholesome compared to your processed data, scattered to the winds of a thousand databases. Everything you have done has been recorded, munged, and spat back at you to benefit sellers, advertisers, and the brokers who service them. It has been for a long time, and it’s not going to stop. The age of privacy nihilism is here, and it’s time to face the dark hollow of its pervasive void.

Ian Bogost

The only forces that we have to stop this are collective action, and governmental action. My concern is that we don't have the digital savvy to do the former, and there's definitely the lack of will in respect of the latter. Troubling times.

Friday fawnings

On this week's rollercoaster journey, I came across these nuggets:

  • Renata Ávila: “The Internet of creation disappeared. Now we have the Internet of surveillance and control” (CCCB Lab) — "This lawyer and activist talks with a global perspective about the movements that the power of “digital colonialism” is weaving. Her arguments are essential for preventing ourselves from being crushed by the technological world, from being carried away by the current of ephemeral divertemento. For being fully aware that, as individuals, our battle is not lost, but that we can control the use of our data, refuse to give away our facial recognition or demand that the privacy laws that protect us are obeyed."
  • Everything Is Private Equity Now (Bloomberg) — "The basic idea is a little like house flipping: Take over a company that’s relatively cheap and spruce it up to make it more attractive to other buyers so you can sell it at a profit in a few years. The target might be a struggling public company or a small private business that can be combined—or “rolled up”—with others in the same industry."
  • Forget STEM, We Need MESH (Our Human Family) — "I would suggest a renewed focus on MESH education, which stands for Media Literacy, Ethics, Sociology, and History. Because if these are not given equal attention, we could end up with incredibly bright and technically proficient people who lack all capacity for democratic citizenship."
  • Connecting the curious (Harold Jarche) — "If we want to change the world, be curious. If we want to make the world a better place, promote curiosity in all aspects of learning and work. There are still a good number of curious people of all ages working in creative spaces or building communities around common interests. We need to connect them."
  • Twitter: No, really, we're very sorry we sold your security info for a boatload of cash (The Register) — "The social networking giant on Tuesday admitted to an "error" that let advertisers have access to the private information customers had given Twitter in order to place additional security protections on their accounts."
  • Digital tools interrupt workers 14 times a day (CIO Dive) — "The constant chime of digital workplace tools including email, instant messaging or collaboration software interrupts knowledge workers 13.9 times on an average day, according to a survey of 3,750 global workers from Workfront."
  • Book review – Curriculum: Athena versus the Machine (TES) — "Despite the hope that the book is a cure for our educational malaise, Curriculum is a morbid symptom of the current political and intellectual climate in English education."
  • Fight for the planet: Building an open platform and open culture at Greenpeace (Opensource.com) — "Being as open as we can, pushing the boundaries of what it means to work openly, doesn't just impact our work. It impacts our identity."
  • Psychodata (Code Acts in Education) — "Social-emotional learning sounds like a progressive, child-centred agenda, but behind the scenes it’s primarily concerned with new forms of child measurement."

Image via xkcd

People will come to adore the technologies that undo their capacities to think

So said Neil Postman (via Jay Springett). Jay is one of a small number of people whose work I find particularly thoughtful and challenging.

Another is Venkatesh Rao, who last week referenced a Twitter thread he posted earlier this year. It's awkward to excerpt and quote the pertinent parts of such things, but I'll give it a try:

Megatrend conclusion: if you do not build a second brain or go offline, you will BECOME the second brain.

[...]

Basically, there's no way to actually handle the volume of information and news that all of us appear to be handling right now. Which means we are getting augmented cognition resources from somewhere. The default place is "social" media.

[...]

What those of us who are here are doing is making a deal with the devil (or an angel): in return for being 1-2 years ahead of curve, we play 2nd brain to a shared first brain. We've ceded control of executive attention not to evil companies, but… an emergent oracular brain.

[...]

I called it playing your part in the Global Social Computer in the Cloud (GSCITC).

[...]

Central trade-off in managing your participation in GSCITC is: The more you attempt to consciously curate your participation rather than letting it set your priorities, the less oracular power you get in return.

Venkatesh Rao

He reckons that being fully immersed in the firehose of social media is somewhat like reading the tea leaves or understanding the runes. You have to 'go with the flow'.

Rao uses the example of the very Twitter thread he's making. Constructing it that way versus, for example, writing a blog post or newsletter means he is in full-on 'gonzo mode' versus what he calls (after Henry David Thoreau) 'Waldenponding'.

I have been generally very unimpressed with the work people seem to generate when they go waldenponding to work on supposedly important things. The comparable people who stay more plugged in seem to produce better work.

My kindest reading of people who retreat so far it actually compromises their work is that it is a mental health preservation move because they can't handle the optimum GSCITC immersion for their project. Their work could be improved if they had the stomach for more gonzo-nausea.

My harshest reading is that they're narcissistic snowflakes who overvalue their work simply because they did it.

Venkatesh Rao

Well, perhaps. But as someone who has attempted to drink from that firehose for over a decade, I think the time comes when you realise something else. Who's setting the agenda here? It's not 'no-one', but neither is it any one person in particular. Rather, the whole structure of what can happen within such a network depends on decisions made by people other than you.

For example, Dan Hon pointed (in a supporter-only newsletter) to an article by Louise Matsakis in WIRED that explains that the social network TikTok not only doesn't add timestamps to user-generated content, but actively blocks the clock on your smartphone. These design decisions affect what can and can't happen, and also the kinds of things that do end up happening.


Writing in The Guardian, Leah McLaren writes about being part of the last generation to really remember life before the internet.

In this age of uncertainty, predictions have lost value, but here’s an irrefutable one: quite soon, no person on earth will remember what the world was like before the internet. There will be records, of course (stored in the intangibly limitless archive of the cloud), but the actual lived experience of what it was like to think and feel and be human before the emergence of big data will be gone. When that happens, what will be lost?

Leah McLaren

McLaren is evidently a few years older than me, as I've been online since I was about 15. However, I definitely reflect on a regular basis about what being hyper-connected does to my sense of self. She cites a recent study published in the official journal of the World Psychiatric Association. Part of the conclusion of that study reads:

As digital technologies become increasingly integrated with everyday life, the Internet is becoming highly proficient at capturing our attention, while producing a global shift in how people gather information, and connect with one another. In this review, we found emerging support for several hypotheses regarding the pathways through which the Internet is influencing our brains and cognitive processes, particularly with regards to: a) the multi‐faceted stream of incoming information encouraging us to engage in attentional‐switching and “multi‐tasking” , rather than sustained focus; b) the ubiquitous and rapid access to online factual information outcompeting previous transactive systems, and potentially even internal memory processes; c) the online social world paralleling “real world” cognitive processes, and becoming meshed with our offline sociality, introducing the possibility for the special properties of social media to impact on “real life” in unforeseen ways.

Firth, J., et al. (2019). The “online brain”: how the Internet may be changing our cognition. World Psychiatry, 18: 119-129.

In her Guardian article, McLaren cites the main author, Dr Joseph Firth:

“The problem with the internet,” Firth explained, “is that our brains seem to quickly figure out it’s there – and outsource.” This would be fine if we could rely on the internet for information the same way we rely on, say, the British Library. But what happens when we subconsciously outsource a complex cognitive function to an unreliable online world manipulated by capitalist interests and agents of distortion? “What happens to children born in a world where transactive memory is no longer as widely exercised as a cognitive function?” he asked.

Leah McLaren

I think this is the problem, isn't it? I've got no issue with having an 'outboard brain' where I store things that I want to look up instead of remember. It's also insanely useful to have a method by which the world can join together in a form of 'hive mind'.

What is problematic is when this 'hive mind' (in the form of social media) is controlled by people and organisations whose interests are orthogonal to our own.

In that situation, there are three things we can do. The first is to seek out nascent 'hive mind'-like spaces which are not controlled by people focused on the problematic concept of 'shareholder value' — Mastodon and other decentralised social networks, for example.

The second is to spend time finding out the voices to which you want to pay particular attention. The chances are that they won't only write down their thoughts via social networks. They are likely to have newsletters, blogs, and even podcasts.

Third, and apologies for the metaphor, but with such massive information consumption the chances are that we become 'constipated'. So if we don't want that to happen, and we don't want to go on an 'information diet', then we need to ensure a better throughput. One of the best things I've done is have a disciplined approach to writing (here on Thought Shrapnel, and elsewhere) about the things I've read and found interesting. That's one way to extract the nutrients.


I'd love your thoughts on this. Do you agree with the above? What strategies do you have in place?

Friday flexitarianism

Check these links out and tell me which one you like best:

  • The radical combination of degrowth and basic income (openDemocracy) — "One of the things you hear whenever you talk about degrowth is that, if the economy doesn't grow, people are going to be without jobs, people will go hungry, and no one wants that. Rich countries might be able to afford slowing down their economies, but not poorer ones. You hear this argument mostly in countries from the Global South, like my own. This misses the point. Degrowth is a critique of our dependency on work. This idea that people have to work to stay alive, and thus the economy needs to keep growing for the sake of keeping people working."
  • The hypersane are among us, if only we are prepared to look (Aeon) — "It is not just that the ‘sane’ are irrational but that they lack scope and range, as though they’ve grown into the prisoners of their arbitrary lives, locked up in their own dark and narrow subjectivity. Unable to take leave of their selves, they hardly look around them, barely see beauty and possibility, rarely contemplate the bigger picture – and all, ultimately, for fear of losing their selves, of breaking down, of going mad, using one form of extreme subjectivity to defend against another, as life – mysterious, magical life – slips through their fingers."
  • "The Tragedy of the Commons": how ecofascism was smuggled into mainstream thought (BoingBoing) — "We are reaching a "peak indifference" tipping point in the climate debate, where it's no longer possible to deny the reality of the climate crisis. I think that many of us assumed that when that happened, we'd see a surge of support for climate justice, the diversion of resources from wealth extraction for the super-rich to climate remediation and defense centered on the public good. But that expectation overestimated the extent to which climate denial was motivated by mere greed."
  • What Would It Take to Shut Down the Entire Internet? (Gizmodo) "One imaginative stumbling block, in playing out the implications of [this] scenario, was how something like that could happen in the first place. And so—without advocating any of the methods described below, or strongly suggesting that hundreds or thousands of like-minded heroes band together to take this sucker down once and for all—...we’ve asked a number of cybersecurity experts how exactly one would go about shutting down the entire internet."
  • Earning, spending, saving: The currency of influence in open source (Opensource.com) — "Even though you can't buy it, influence behaves like a form of virtual currency in an open source community: a scarce resource, always needed, but also always in short supply. One must earn it through contributions to an open source project or community. In contrast to monetary currency, however, influence is not transferable. You must earn it for yourself. You can neither give nor receive it as a gift."
  • The Art of Topophilia: 7 Ways to Love the Place You Live (Art of Manliness) — "It’s not only possible to kindle this kind of topophilic love affair with “sexier” places chock full of well-hyped advantages, but also with so-called undesirable communities that aren’t on the cultural radar. Just as people who may initially appear lowly and unappealing, but have warm and welcoming personalities, come to seem more attractive the more we get to know them, so too can sleepier, less vaunted locales."
  • A Like Can’t Go Anywhere, But a Compliment Can Go a Long Way (Frank Chimero) — "Passive positivity isn’t enough; active positivity is needed to counterbalance whatever sort of collective conversations and attention we point at social media. Otherwise, we are left with the skewed, inaccurate, and dangerous nature of what’s been built: an environment where most positivity is small, vague, and immobile, and negativity is large, precise, and spreadable."
  • EU recognises "right to repair" in push to make appliances last longer (Dezeen) — "Not included in the EU right to repair rules are devices such as smart phones and laptops, whose irreplaceable batteries and performance-hampering software updates are most often accused of encouraging throwaway culture."
  • I'm a Psychotherapist Who Sets 30-Day Challenges Instead of Long-Term Goals. Here's Why (Inc.) — "Studies show our brains view time according to either "now deadlines" or "someday deadlines." And "now deadlines" often fall within this calendar month."

Image by Yung-sen Wu (via The Atlantic)

Technology is the name we give to stuff that doesn't work properly yet

So said my namesake Douglas Adams. In fact, he said lots of wise things about technology, most of them too long to serve as a title.

I'm in a weird place, emotionally, at the moment, but sometimes this can be a good thing. Being taken out of your usual 'autopilot' can be a useful way to see things differently. So I'm going to take this opportunity to share three things that, to be honest, make me a bit concerned about the next few years...

Attempts to put microphones everywhere

Alexa-enabled EVERYTHING

In an article for Slate, Shannon Palus ranks all of Amazon's new products by 'creepiness'. The Echo Frames are, in her words:

A microphone that stays on your person all day and doesn’t look like anything resembling a microphone, nor follows any established social codes for wearable microphones? How is anyone around you supposed to have any idea that you are wearing a microphone?

Shannon Palus

When we're not talking about weapons of mass destruction, it's not the tech that concerns me, but the context in which the tech is used. As Palus points out, how are you going to be able to have a 'quiet word' with anyone wearing glasses ever again?

It's not just Amazon, of course. Google and Facebook are at it, too.

Full-body deepfakes

https://www.youtube.com/watch?v=8siezzLXbNo
Scary stuff

With the exception, perhaps, of populist politicians, I don't think we're ready for a post-truth society. Check out the video above, which shows Chinese technology that allows for 'full body deepfakes'.

The video is embedded, along with a couple of others, in an article for Fast Company by DJ Pangburn, who also notes that AI is learning human body movements from videos. Not only will you be able to prank your friends by showing them a convincing video of your ability to do 100 pull-ups, but the fake news it engenders will mean we can't trust anything any more.

Neuromarketing

If you clicked on the 'super-secret link' in Sunday's newsletter, you will have come across STEALING UR FEELINGS which is nothing short of incredible. As powerful as it is in showing you the kind of data that organisations have on us, it's the tip of the iceberg.

Kaveh Waddell, in an article for Axios, explains that brains are the last frontier for privacy:

"The sort of future we're looking ahead toward is a world where our neural data — which we don't even have access to — could be used" against us, says Tim Brown, a researcher at the University of Washington Center for Neurotechnology.

Kaveh Waddell

This would lead to 'neuromarketing', with advertisers knowing what triggers and influences you better than you know yourself. Also, it will no doubt be used for discriminatory purposes and, because it's coming directly from your brainwaves, short of literally wearing a tinfoil hat, there's nothing much you can do.


So there we are. Am I being too fearful here?

Friday fluctuations

Have a quick skim through these links that I came across this week and found interesting:

  • Overrated: Ludwig Wittgenstein (Standpoint) — "Wittgenstein’s reputation for genius did not depend on incomprehensibility alone. He was also “tortured”, rude and unreliable. He had an intense gaze. He spent months in cold places like Norway to isolate himself. He temporarily quit philosophy, because he believed that he had solved all its problems in his 1922 Tractatus Logico-Philosophicus, and worked as a gardener. He gave away his family fortune. And, of course, he was Austrian, as so many of the best geniuses are."
  • EdTech Resistance (Ben Williamson) ⁠— "We should not and cannot ignore these tensions and challenges. They are early signals of resistance ahead for edtech which need to be engaged with before they turn to public outrage. By paying attention to and acting on edtech resistances it may be possible to create education systems, curricula and practices that are fair and trustworthy. It is important not to allow edtech resistance to metamorphose into resistance to education itself."
  • The Guardian view on machine learning: a computer cleverer than you? (The Guardian) — "The promise of AI is that it will imbue machines with the ability to spot patterns from data, and make decisions faster and better than humans do. What happens if they make worse decisions faster? Governments need to pause and take stock of the societal repercussions of allowing machines over a few decades to replicate human skills that have been evolving for millions of years."
  • A nerdocratic oath (Scott Aaronson) — "I will never allow anyone else to make me a cog. I will never do what is stupid or horrible because “that’s what the regulations say” or “that’s what my supervisor said,” and then sleep soundly at night. I’ll never do my part for a project unless I’m satisfied that the project’s broader goals are, at worst, morally neutral. There’s no one on earth who gets to say: “I just solve technical problems. Moral implications are outside my scope”."
  • Privacy is power (Aeon) — "The power that comes about as a result of knowing personal details about someone is a very particular kind of power. Like economic power and political power, privacy power is a distinct type of power, but it also allows those who hold it the possibility of transforming it into economic, political and other kinds of power. Power over others’ privacy is the quintessential kind of power in the digital age."
  • The Symmetry and Chaos of the World's Megacities (WIRED) — "Koopmans manages to create fresh-looking images by finding unique vantage points, often by scouting his locations on Google Earth. As a rule, he tries to get as high as he can—one of his favorite tricks is talking local work crews into letting him shoot from the cockpit of a construction crane."
  • Green cities of the future - what we can expect in 2050 (RNZ) — "In their lush vision of the future, a hyperloop monorail races past in the foreground and greenery drapes the sides of skyscrapers that house communal gardens and vertical farms."
  • Wittgenstein Teaches Elementary School (Existential Comics) ⁠— "And I'll have you all know, there is no crying in predicate logic."
  • Ask Yourself These 5 Questions to Inspire a More Meaningful Career Move (Inc.) — "Introspection on the right things can lead to the life you want."

Image from Do It Yurtself

It’s not a revolution if nobody loses

Thanks to Clay Shirky for today's title. It's true, isn't it? You can't claim something to be a true revolution unless someone, some organisation, or some group of people loses.

I'm happy to say that it's the turn of some older white men to be losing right now, and particularly delighted that those who have spent decades abusing and repressing people are getting their comeuppance.

Enough has been written about Epstein and the fallout from it. You can read about comments made by Richard Stallman, founder of the Free Software Foundation, in this Washington Post article. I've only met RMS (as he's known) in person once, at the Indie Tech Summit five years ago, but it wasn't a great experience. While I'm willing to cut visionary people some slack, he mostly acted like a jerk.

RMS is a revered figure in Free Software circles, and it's actually quite difficult not to agree with his stance on many political and technological matters. That being said, he deserves everything he gets for the comments he made about child abuse, for the way he's treated women over the past few decades, and for his dictator-like approach to software projects.

In an article for WIRED entitled Richard Stallman’s Exit Heralds a New Era in Tech, Noam Cohen writes that we're entering a new age. I certainly hope so.

This is a lesson we are fast learning about freedom as it is promoted by the tech world. It is not about ensuring that everyone can express their views and feelings. Freedom, in this telling, is about exclusion. The freedom to drive others away. And, until recently, freedom from consequences.

After 40 years of excluding those who didn’t serve his purposes, however, Stallman finds himself excluded by his peers. Freedom.

Maybe freedom, defined in this crude, top-down way, isn’t the be-all, end-all. Creating a vibrant inclusive community, it turns out, is as important to a software project as a coding breakthrough. Or, to put it in more familiar terms—driving away women, investing your hopes in a single, unassailable leader is a critical bug. The best patch will be to start a movement that is respectful, inclusive, and democratic.

Noam Cohen

One of the things that the next leaders of the Free Software Movement will have to address is how to take practical steps to guarantee our basic freedoms in a world where Big Tech provides surveillance to ever-more-powerful governments.

Cory Doctorow is an obvious person to look to in this regard. He has a history of understanding what's going on and writing about it in ways that people understand. In an article for The Globe and Mail, Doctorow notes that a decline in trust of political systems and experts more generally isn't because people are more gullible:

40 years of rising inequality and industry consolidation have turned our truth-seeking exercises into auctions, in which lawmakers, regulators and administrators are beholden to a small cohort of increasingly wealthy people who hold their financial and career futures in their hands.

[...]

To be in a world where the truth is up for auction is to be set adrift from rationality. No one is qualified to assess all the intensely technical truths required for survival: even if you can master media literacy and sort reputable scientific journals from junk pay-for-play ones; even if you can acquire the statistical literacy to evaluate studies for rigour; even if you can acquire the expertise to evaluate claims about the safety of opioids, you can’t do it all over again for your city’s building code, the aviation-safety standards governing your next flight, the food-safety standards governing the dinner you just ordered.

Cory Doctorow

What's this got to do with technology, and in particular Free Software?

Big Tech is part of this problem... because they have monopolies, thanks to decades of buying nascent competitors and merging with their largest competitors, of cornering vertical markets and crushing rivals who won't sell. Big Tech means that one company is in charge of the social lives of 2.3 billion people; it means another company controls the way we answer every question it occurs to us to ask. It means that companies can assert the right to control which software your devices can run, who can fix them, and when they must be sent to a landfill.

These companies, with their tax evasion, labour abuses, cavalier attitudes toward our privacy and their completely ordinary human frailty and self-deception, are unfit to rule our lives. But no one is fit to be our ruler. We deserve technological self-determination, not a corporatized internet made up of five giant services each filled with screenshots from the other four.

Cory Doctorow

Doctorow suggests breaking up these companies to end their de facto monopolies and level the playing field.

The problem of tech monopolies is something that Stowe Boyd explored in a recent article entitled Are Platforms Commons? Citing previous precedents around railroads, Boyd has many questions, including whether successful platforms should be bound by the legal principles of 'common carriers', and finishes with this:

However, just one more question for today: what if ecosystems were constructed so that they were governed by the participants, rather by the hypercapitalist strivings of the platform owners — such as Apple, Google, Amazon, Facebook — or the heavy-handed regulators? Is there a middle ground where the needs of the end user and those building, marketing, and shipping products and services can be balanced, and a fair share of the profits are distributed not just through common carrier laws but by the shared economics of a commons, and where the platform orchestrator gets a fair share, as well? We may need to shift our thinking from common carrier to commons carrier, in the near future.

Stowe Boyd

The trouble is, simply establishing a commons doesn't solve all of the problems. In fact, what tends to happen next is well known:

The tragedy of the commons is a situation in a shared-resource system where individual users, acting independently according to their own self-interest, behave contrary to the common good of all users, by depleting or spoiling that resource through their collective action.

Wikipedia

An article in The Economist outlines the usual remedies to the 'tragedy of the commons': either governmental regulation (e.g. airspace), or property rights (e.g. land). However, the article cites the work of Elinor Ostrom, a Nobel prizewinning economist, showing that another way is possible:

An exclusive focus on states and markets as ways to control the use of commons neglects a varied menagerie of institutions throughout history. The information age provides modern examples, for example Wikipedia, a free, user-edited encyclopedia. The digital age would not have dawned without the private rewards that flowed to successful entrepreneurs. But vast swathes of the web that might function well as commons have been left in the hands of rich, relatively unaccountable tech firms.

[...]

A world rich in healthy commons would of necessity be one full of distributed, overlapping institutions of community governance. Cultivating these would be less politically rewarding than privatisation, which allows governments to trade responsibility for cash. But empowering commoners could mend rents in the civic fabric and alleviate frustration with out-of-touch elites.

The Economist

I count myself as someone on the left of politics, if that's how we're measuring things today. However, I don't think we need representation at any higher level than is strictly necessary.

In a time when technology allows you, to a great extent, to represent yourself, perhaps we need ways of demonstrating how complex and multi-faceted some issues are? Perhaps we need to try 'liquid democracy':

Liquid democracy lies between direct and representative democracy. In direct democracy, participants must vote personally on all issues, while in representative democracy participants vote for representatives once in certain election cycles. Meanwhile, liquid democracy does not depend on representatives but rather on a weighted and transitory delegation of votes. Liquid democracy through elections can empower individuals to become sole interpreters of the interests of the nation. It allows for citizens to vote directly on policy issues, delegate their votes on one or multiple policy areas to delegates of their choosing, delegate votes to one or more people, delegated to them as a weighted voter, or get rid of their votes' delegations whenever they please.

Wikipedia
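The delegation mechanics described in the quotation above can be sketched in a few lines of code. This is a minimal, illustrative sketch rather than a reference implementation: the function name, the data shapes, and the rule that a delegation cycle counts as an abstention are all my own assumptions. Each voter either votes directly or delegates to another voter, and delegations are followed transitively until they reach a direct vote.

```python
def tally(direct_votes, delegations):
    """Count votes where each voter either votes directly or delegates.

    direct_votes: dict mapping voter -> option they voted for
    delegations:  dict mapping voter -> voter they delegated to
    A direct vote takes precedence over a delegation; a voter caught
    in a delegation cycle that never reaches a direct vote abstains.
    """
    counts = {}
    for voter in set(direct_votes) | set(delegations):
        current, seen = voter, set()
        # Follow the delegation chain until we find a direct vote.
        while current in delegations and current not in direct_votes:
            if current in seen:  # cycle: treat as an abstention
                current = None
                break
            seen.add(current)
            current = delegations[current]
        if current is not None and current in direct_votes:
            option = direct_votes[current]
            counts[option] = counts.get(option, 0) + 1
    return counts

# Alice and Bob vote directly; Carol delegates to Alice, Dave to Carol.
result = tally(
    {"alice": "yes", "bob": "no"},
    {"carol": "alice", "dave": "carol"},
)
# Alice's position carries her own vote plus Carol's and Dave's.
```

The point of the sketch is that delegation is "weighted and transitory" almost for free: Alice ends up speaking for three voters, but Carol or Dave could reclaim their vote at any time simply by voting directly or removing their delegation.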

I think, given the state that politics is in right now, it's well worth a try. The problem, of course, is that the losers would be the political elites, the current incumbents. But, hey, it's not a revolution if nobody loses, right?

Saturday strikings

This week's roundup is going out a day later than usual, as yesterday was the Global Climate Strike and Thought Shrapnel was striking too!

Here's what I've been paying attention to this week:

  • How does a computer ‘see’ gender? (Pew Research Center) — "Machine learning tools can bring substantial efficiency gains to analyzing large quantities of data, which is why we used this type of system to examine thousands of image search results in our own studies. But unlike traditional computer programs – which follow a highly prescribed set of steps to reach their conclusions – these systems make their decisions in ways that are largely hidden from public view, and highly dependent on the data used to train them. As such, they can be prone to systematic biases and can fail in ways that are difficult to understand and hard to predict in advance."
  • The Communication We Share with Apes (Nautilus) — "Many primate species use gestures to communicate with others in their groups. Wild chimpanzees have been seen to use at least 66 different hand signals and movements to communicate with each other. Lifting a foot toward another chimp means “climb on me,” while stroking their mouth can mean “give me the object.” In the past, researchers have also successfully taught apes more than 100 words in sign language."
  • Why degrowth is the only responsible way forward (openDemocracy) — "If we free our imagination from the liberal idea that well-being is best measured by the amount of stuff that we consume, we may discover that a good life could also be materially light. This is the idea of voluntary sufficiency. If we manage to decide collectively and democratically what is necessary and enough for a good life, then we could have plenty."
  • 3 times when procrastination can be a good thing (Fast Company) — "It took Leonardo da Vinci years to finish painting the Mona Lisa. You could say the masterpiece was created by a master procrastinator. Sure, da Vinci wasn’t under a tight deadline, but his lengthy process demonstrates the idea that we need to work through a lot of bad ideas before we get down to the good ones."
  • Why can’t we agree on what’s true any more? (The Guardian) — "What if, instead, we accepted the claim that all reports about the world are simply framings of one kind or another, which cannot but involve political and moral ideas about what counts as important? After all, reality becomes incoherent and overwhelming unless it is simplified and narrated in some way or other.
  • A good teacher voice strikes fear into grown men (TES) — "A good teacher voice can cut glass if used with care. It can silence a class of children; it can strike fear into the hearts of grown men. A quiet, carefully placed “Excuse me”, with just the slightest emphasis on the “-se”, is more effective at stopping an argument between adults or children than any amount of reason."
  • Freeing software (John Ohno) — "The only way to set software free is to unshackle it from the needs of capital. And, capital has become so dependent upon software that an independent ecosystem of anti-capitalist software, sufficiently popular, can starve it of access to the speed and violence it needs to consume ever-doubling quantities of to survive."
  • Young People Are Going to Save Us All From Office Life (The New York Times) — "Today’s young workers have been called lazy and entitled. Could they, instead, be among the first to understand the proper role of work in life — and end up remaking work for everyone else?"
  • Global climate strikes: Don’t say you’re sorry. We need people who can take action to TAKE ACTUAL ACTION (The Guardian) — "Brenda the civil disobedience penguin gives some handy dos and don’ts for your civil disobedience"

All is petty, inconstant, and perishable

So said Marcus Aurelius. Today's short article is about what happens after you die. We're all aware of the importance of making a will, particularly if you have dependants. But that's primarily for your analogue, offline life. What about your digital life?

In a recent TechCrunch article, Jon Evans writes:

I really wish I hadn’t had cause to write this piece, but it recently came to my attention, in an especially unfortunate way, that death in the modern era can have a complex and difficult technical aftermath. You should make a will, of course. Of course you should make a will. But many wills only dictate the disposal of your assets. What will happen to the other digital aspects of your life, when you’re gone?

Jon Evans

The article points to a template for a Digital Estate Planning Document which you can use to list all of the places that you're active. Interestingly, the suggestion is to have a 'digital executor', which makes sense: the more technical you are, the more likely it is that other members of your family won't be able to follow your instructions.

Interestingly, the Wikipedia article on digital wills has some very specific advice of which the above-mentioned document is only a part:

  1. Appoint someone as online executor
  2. State in a formal document how profiles and accounts are handled
  3. Understand privacy policies
  4. Provide online executor list of websites and logins
  5. State in the will that the online executor must have a copy of the death certificate

I hadn't really thought about this, but the chances of identity theft after someone has died are as great as, if not greater than, when they were alive:

An article by Magder in the newspaper The Gazette provides a reminder that identity theft can potentially continue to be a problem even after death if their information is released to the wrong people. This is why online networks and digital executors require proof of a death certificate from a family member of the deceased person in order to acquire access to accounts. There are instances when access may still be denied, because of the prevalence of false death certificates.

Wikipedia

Zooming out a bit, and thinking about this from my own perspective, it's a good idea to insist on good security practices for your nearest and dearest. Ensure they know how to use password managers and use two-factor authentication on their accounts. If they do this for themselves, they'll understand how to do it with your accounts when you're gone.

One thing it's made me think about is the length of time for which I renew domain names. I tend to just renew mine (I have quite a few) on a yearly basis. But what if the worst happened? Those payment details would be declined, and my sites would be offline in a year or less.

All of this makes me think that the important thing here is to keep things as simple as possible. As I've discussed in another article, the way people remember us after we're gone is kind of important.

Most of us could, I think, divide our online life into three buckets:

  • Really important to my legacy
  • Kind of important
  • Not important

So if, for example, I died tomorrow, the domain renewal for Thought Shrapnel lapsed next year, and a scammer took it over, that would be terrible. It's part of the reason why I still renew domains I don't use. So this would go in the 'really important to my legacy' bucket.

On the other hand, my experiments with various tools and platforms I'm less bothered about. They would probably go in the 'not important' bucket.

Then there's that awkward middle space. Things like the site for my doctoral thesis when the 'official' copy is in the Durham University e-Theses repository.

Ultimately, it's a conversation to have with those close to you. For me, it's on my mind after the death of a good friend and so something I should get to before life goes back to some version of normality. After all, figuring out someone else's digital life admin is the last thing people want when they're already dealing with grief.

Friday fermentations

I boiled the internet and this was what remained:

  • I Quit Social Media for a Year and Nothing Magical Happened (Josh C. Simmons) — "A lot of social media related aspects of my life are different now – I’m not sure they’re better, they’re just different, but I can confidently say that I prefer this normal to last year’s. There’s a bit of rain with all of the sunshine. I don’t see myself ever going back to social media. I don’t see the point of it, and after leaving for a while, and getting a good outside look, it seems like an abusive relationship – millions of workers generating data for tech-giants to crunch through and make money off of. I think that we tend to forget how we were getting along pretty well before social media – not everything was idyllic and better, but it was fine."
  • Face recognition, bad people and bad data (Benedict Evans) — "My favourite example of what can go wrong here comes from a project for recognising cancer in photos of skin. The obvious problem is that you might not have an appropriate distribution of samples of skin in different tones. But another problem that can arise is that dermatologists tend to put rulers in the photo of cancer, for scale - so if all the examples of ‘cancer’ have a ruler and all the examples of ‘not-cancer’ do not, that might be a lot more statistically prominent than those small blemishes. You inadvertently built a ruler-recogniser instead of a cancer-recogniser."
  • Would the Internet Be Healthier Without 'Like' Counts? (WIRED) ⁠— "Online, value is quantifiable. The worth of a person, idea, movement, meme, or tweet is often based on a tally of actions: likes, retweets, shares, followers, views, replies, claps, and swipes-up, among others. Each is an individual action. Together, though, they take on outsized meaning. A YouTube video with 100,000 views seems more valuable than one with 10, even though views—like nearly every form of online engagement—can be easily bought. It’s a paradoxical love affair. And it’s far from an accident."
  • Are Platforms Commons? (On The Horizon) — "[W]hat if ecosystems were constructed so that they were governed by the participants, rather by the hypercapitalist strivings of the platform owners — such as Apple, Google, Amazon, Facebook — or the heavy-handed regulators? Is there a middle ground where the needs of the end user and those building, marketing, and shipping products and services can be balanced, and a fair share of the profits are distributed not just through common carrier laws but by the shared economics of a commons, and where the platform orchestrator gets a fair share, as well?"
  • Depression and anxiety threatened to kill my career. So I came clean about it (The Guardian) — "To my surprise, far from rejecting me, students stayed after class to tell me how sorry they were. They left condolence cards in my mailbox and sent emails to let me know they were praying for my family. They stopped by my office to check on me. Up to that point, I’d been so caught up in my despair that it never occurred to me that I might be worthy of concern and support. Being accepted despite my flaws touched me in ways that are hard to express."
  • Absolute scale corrupts absolutely (apenwarr) — "Here's what we've lost sight of, in a world where everything is Internet scale: most interactions should not be Internet scale. Most instances of most programs should be restricted to a small set of obviously trusted people. All those people, in all those foreign countries, should not be invited to read Equifax's PII database in Argentina, no matter how stupid the password was. They shouldn't even be able to connect to the database. They shouldn't be able to see that it exists. It shouldn't, in short, be on the Internet."
  • The Automation Charade (Logic magazine) — "The problem is that the emphasis on technological factors alone, as though “disruptive innovation” comes from nowhere or is as natural as a cool breeze, casts an air of blameless inevitability over something that has deep roots in class conflict. The phrase “robots are taking our jobs” gives technology agency it doesn’t (yet?) possess, whereas “capitalists are making targeted investments in robots designed to weaken and replace human workers so they can get even richer” is less catchy but more accurate."
  • The ambitious plan to reinvent how websites get their names (MIT Technology Review) — "The system would be based on blockchain technology, meaning it would be software that runs on a widely distributed network of computers. In theory, it would have no single point of failure and depend on no human-run organization that could be corrupted or co-opted."
  • O whatever God or whatever ancestor that wins in the next life (The Main Event) — "And it begins to dawn on you that the stories were all myths and the epics were all narrated by the villains and the history books were written to rewrite the histories and that so much of what you thought defined excellence merely concealed grift."
  • A Famous Argument Against Free Will Has Been Debunked (The Atlantic) — "In other words, people’s subjective experience of a decision—what Libet’s study seemed to suggest was just an illusion—appeared to match the actual moment their brains showed them making a decision."

If you change nothing, nothing will change

What would you do if you knew you had 24 hours left to live? I suppose it would depend on context. Is this catastrophe going to affect everyone, or only you? I'm not sure I'd know what to do in the former case, but once I'd said my goodbyes to my family, I'm pretty sure I know what I'd do in the latter.

Yep, I would go somewhere by myself and write.

To me, the reason both reading and writing can feel so freeing is that they allow you to mentally escape your physical constraints. It almost doesn't matter what's happening to your body or anything around you while you lose yourself in someone else's words, or you create your own.


I came across an interesting blog recently. It had a single post, entitled Consume less, create more. In it, the author, 'Tom', explains that the 1,600 words he's shared were written over the course of a month after he realised that he was spending his life consuming instead of creating.

A lot of ink has been spilled about the perils of modern technology. How it distracts us, how it promotes unhealthy comparisons with others, how it makes us fat, how it limits social interaction, how it spies on us. And all of these things are probably true, to some extent.

But the real tragedy of modern technology is that it’s turned us into consumers. Our voracious consumption of media parallels our consumption of fossil fuels, corn syrup, and plastic straws. And although we’re starting to worry about our consumption of those physical goods, we seem less concerned about our consumption of information.

We treat information as necessarily good, and comfort ourselves with the feeling that whatever article or newsletter we waste our time with is actually good for us. We equate reading with self improvement, even though we forget most of what we’ve read, and what we remember isn’t useful.

TJCX

I feel that at this juncture in history, we've honed surveillance-via-smartphone into the perfect tool for maximising FOMO. For those growing up in the goldfish bowl of the modern world, this may feel as normal as the 'water' in which they are 'swimming'. But for the rest of us, it can still feel... odd.

This is going to sound pretty amazing, but I don't think there have been many days in my adult life when I've been able to go somewhere without anyone else knowing. As a kid? Absolutely. I can vividly remember, for example, cycling to a corn field and finding a place to lie down and look at the sky, knowing that no-one could see me. It was time spent with myself, unmediated and unfiltered.

This didn't use to be unusual. People had private inner lives that were manifested in private actions. In a recent column in The Guardian, Grace Dent expanded on this.

Yes life after iPhones is marvellous, but in the 90s I ran wild across London, up to all kinds of no good, staying out for days, keeping my own counsel entirely. My parents up north would not speak to me for weeks. Sometimes, life back in the days when we had one shit Nokia and a landline between five friends seems blissful. One was permitted lost weekends and periods of secret skulduggery or just to lie about reading a paperback without the sense six people were owed a text message. Yes, things took longer, and one needed to make plans and keep them, but being off the grid was normal. Today, not replying... is a truly radical act.

Grace Dent

"Not replying... is a truly radical act". Wow. Let that sink in for a moment.


Given all this, it's no wonder in our always-on culture that we have so much 'life admin' to concern ourselves with. Previous generations may have had 'pay the bills' on their to-do list, but it wasn't nudged down by 'inform a person I kind of know on Twitter that they have an incorrect view on Brexit'.

All of these things build up incrementally until they eventually become unsustainable. It's death by a thousand cuts. As I've quoted many times before, Jocelyn K. Glei's question is always worth asking: who are you without the doing?


Realistically, most of our days are likely to involve some use of digital communication tools. We can't always be throwing off our shackles to live the life of a flâneur. To facilitate space to create, therefore, it's important to draw some red lines. This is what Michael Bernstein talks about in Sorry, we can't join your Slack.

Saying yes to joining client Slack channels would mean that down the line we’d feel more exhausted but less accomplished. We’d have more superficial “friends,” but wouldn’t know how to deal with products much better than we did now. We’d be on the hook all the time, and have less of an opportunity to consider our responses.

Michael Bernstein

In other words, being more available and more 'social' takes time away from more important pursuits. After all, time is the ultimate zero-sum game.


Ultimately, I guess it's about learning to see the world differently. There may very well be a 'new normal' that we've begun to internalise but, for now at least, we have a choice to use that 'flexibility' we hear so much about to our advantage.

This is why self-reflection is so important, as Wanda Thibodeaux explains in an article for Inc.

In sum, elimination of stress and the acceptance of peace comes not necessarily from changing the world, but rather from clearing away all the learned clutter that prevents us from changing our view of the world. Even the biggest systemic "realities" (e.g., work "HAS" to happen from 9 a.m. to 5 p.m.) are up for reinterpretation and rewriting, and arguably, inner calm and innovation both stem from the same challenge of perceptions.

Wanda Thibodeaux

To do this, you have to have already decided the purpose for which you're using your tools, including the ones provided by your smartphone.

Need more specific advice on that? I suggest you go and read this really handy post by Ryan Holiday: A Radical Guide to Spending Less Time on Your Phone. The advice to be focused on which apps you need on your phone is excellent; I deleted over 100!

You may also find useful a post I wrote on my blog a few months ago about how changing the 'launcher' on your phone can change your life.


If you make some changes after reading this, I'd be interested in hearing how you get on. Let me know in the comments section below!


Quotation-as-title from Rajkummar Rao.

Friday feudalism

Check out these things I discovered this week, and wanted to pass along:

  • Study shows some political beliefs are just historical accidents (Ars Technica) — "Obviously, these experiments aren’t exactly like the real world, where political leaders can try to steer their parties. Still, it’s another way to show that some political beliefs aren’t inviolable principles—some are likely just the result of a historical accident reinforced by a potent form of tribal peer pressure. And in the early days of an issue, people are particularly susceptible to tribal cues as they form an opinion."
  • Please, My Digital Archive. It’s Very Sick. (Lapham's Quarterly) — "An archivist’s dream is immaculate preservation, documentation, accessibility, the chance for our shared history to speak to us once more in the present. But if the preservation of digital documents remains an unsolvable puzzle, ornery in ways that print materials often aren’t, what good will our archiving do should it become impossible to inhabit the world we attempt to preserve?"
  • So You’re 35 and All Your Friends Have Already Shed Their Human Skins (McSweeney's) — "It’s a myth that once you hit 40 you can’t slowly and agonizingly mutate from a human being into a hideous, infernal arachnid whose gluttonous shrieks are hymns to the mad vampire-goddess Maggorthulax. You have time. There’s no biological clock ticking. The parasitic worms inside you exist outside of our space-time continuum."
  • Investing in Your Ordinary Powers (Breaking Smart) — "The industrial world is set up to both encourage and coerce you to discover, as early as possible, what makes you special, double down on it, and build a distinguishable identity around it. Your specialness-based identity is in some ways your Industrial True Name. It is how the world picks you out from the crowd."
  • Browser Fingerprinting: An Introduction and the Challenges Ahead (The Tor Project) — "This technique is so rooted in mechanisms that exist since the beginning of the web that it is very complex to get rid of it. It is one thing to remove differences between users as much as possible. It is a completely different one to remove device-specific information altogether."
  • What is a Blockchain Phone? The HTC Exodus explained (giffgaff) — "HTC believes that in the future, your phone could hold your passport, driving license, wallet, and other important documents. It will only be unlockable by you which makes it more secure than paper documents."
  • Debate rages in Austria over enshrining use of cash in the constitution (EURACTIV) — "Academic and author Erich Kirchler, a specialist in economic psychology, says in Austria and Germany, citizens are aware of the dangers of an overmighty state from their World War II experience."
  • Cory Doctorow: DRM Broke Its Promise (Locus magazine) — "We gave up on owning things – property now being the exclusive purview of transhuman immortal colony organisms called corporations – and we were promised flexibility and bargains. We got price-gouging and brittleness."
  • Five Books That Changed Me In One Summer (Warren Ellis) — "I must have been around 14. Rayleigh Library and the Oxfam shop a few doors down the high street from it, which someone was clearly using to pay things forward and warp younger minds."

To refrain from imitation is the best revenge

Today's title comes from Marcus Aurelius' Meditations, which regular readers of my writing will know I read on repeat. George Herbert, the English poet, expressed something similar in the proverb "living well is the best revenge".

But what do these things actually mean in practice?


One of my favourite episodes of Frasier (the only sitcom I've ever really enjoyed) is when Niles has to confront his childhood bully. It leads to this magnificent exchange:

Frasier:
You know the expression, "Living well is the best revenge"?
Niles:
It's a wonderful expression. I just don't know how true it is. You don't see it turning up in a lot of opera plots. "Ludwig, maddened by the poisoning of his entire family, wreaks vengeance on Gunther in the third act by living well."
Frasier:
All right, Niles.
Niles:
"Whereupon Woton, upon discovering his deception, wreaks vengeance on Gunther in the third act again by living even better than the Duke."
Frasier:
Oh, all right!

In other words, it often doesn't feel that 'living well' makes any tangible difference.

But let's step back a moment. What does it mean to 'live well'? Is it the same as refraining from imitating others, or are Marcus Aurelius and George Herbert talking about two entirely different things?


During an email exchange last week, someone mentioned that they weren't sure whether my segues between topics were 'brilliant' or 'tenuous'. Well, dear reader, here's a chance to judge for yourself...


In a recent article for Fast Company, ostensibly about 'personal branding', Trip O'Dell gets awfully deep awfully quickly and starts invoking Aristotle:

Aristotle is the father of Western philosophy because he didn’t focus on likes, engagement, or followers. Aristotle focused on the nature of authenticity; what it means to be real but also persuasive. He broke the requirements for persuasiveness into four simple elements: ethos (reputation/authority), logos (logic), pathos (feeling), and kairos (timing). Those four elements are required to argue persuasively in any context. However, the stakes are higher in business. Confidently communicating who you are, what you stand for, and why you’re great at what you do is not only essential, it’s liberating.

Trip O'Dell

What I particularly like about the article is the re-focusing on 'personal ethos' rather than 'personal brand'. Branding is a form of marketing, of changing the surface appearance of something. It's about morphing a product (in this case, yourself) into something that better fits in with what other people expect.

An ethos runs much deeper. It is, as Aristotle noted, about your reputation or authority, neither of which are manufactured overnight.

The hardest part of establishing a professional ethos is describing it; it takes work, and it isn’t easy. The process requires a level of maturity and self-awareness that can be uncomfortable at times. You’re forced to ask some essential questions and make yourself vulnerable to critique and rejection. That discomfort is the tax that is paid to eliminate self-defeating habits that hold many people back in their professional lives.

Trip O'Dell

This is where that magnificent word 'authenticity' comes in. No-one really knows what it means, but everyone wants to have it. I'd argue that authenticity is a by-product of reputation and authority. Easy to destroy, difficult to build.


Let me set my stall out by saying that I think that Marcus Aurelius ("To refrain from imitation is the best revenge") and George Herbert ("Living well is the best revenge") were actually talking about much the same thing.

I don't know much about George Herbert, but Wikipedia tells me he was an orator as well as a poet, and fluent in Latin and Greek. So I'm surmising that he at least had a passing knowledge of the Stoics. The chances are he was using his poetic flair to make Marcus Aurelius' quotation a little more memorable.


Revenge can be dramatic and explosive. It can be as subtle as tiny daggers. Either way, revenge involves communicating something to another person in such a way that they realise you've got one up on them.

Malice may or may not be involved; it's probably better if it isn't. The pop diva Mariah Carey is the queen of this, claiming that she "doesn't know" people with whom she's allegedly having a feud.

But, back to the dead white dudes. In How to Think Like a Roman Emperor, Donald Robertson explains that the Stoics saw both the way we live and the way we communicate as important.

The Stoics realized that to communicate wisely, we must phrase things appropriately. Indeed, according to Epictetus, the most striking characteristic of Socrates was that he never became irritated during an argument. He was always polite and refrained from speaking harshly even when others insulted him. He patiently endured much abuse and yet was able to put an end to most quarrels in a calm and rational manner.

Donald J. Robertson

In other words, you don't need to imitate other people's anger, irritability, or lack of patience. You can 'live well' by being comfortable in your own skin and demonstrating the calm waters of your soul.

This, of course, is hard work. Nietzsche is famously quoted as saying:

He who fights too long against dragons becomes a dragon himself; and if you gaze too long into the abyss, the abyss will gaze into you.

Friedrich Nietzsche

Feel free to substitute 'internet trolls' or 'petty-minded neighbours' for 'dragons'. The effect is the same. Marcus Aurelius is reminding us that refraining from imitating their behaviour is the best form of revenge.

Likewise, George Herbert is telling us that 'living well' is (as Trip O'Dell notes in that Fast Company article) about having a 'personal ethos'. It's about knowing who you are and where you're going. And, potentially, acting like Mariah Carey, throwing shade on your enemies by not acknowledging their existence.

Friday floutings

Did you see these things this week? I did, and thought they were aces.

  1. Do you live in a ‘soft city’? Here’s why you probably want to (Fast Company) — "The benefits of taking a layered approach to building design—and urban planning overall—is that it also cuts down on the amount of travel by car that people need to do. If resources are assembled in a way that a person leaving their home can access everything they need by walking, biking, or taking transit, it frees up space for streets to also be layered to support these different modes."
  2. YouTube should stop recommending garbage videos to users (Ars Technica) — "When a video finishes playing, YouTube should show the next video in the same channel. Or maybe it could show users a video selected from a list of high-quality videos curated by human YouTube employees. But the current approach—in which an algorithm tries to recommend the most engaging videos without worrying about whether they're any good—has got to go."
  3. Fairphone 3 is the 'ethical' smartphone you might actually buy (Engadget) — "Doing the right thing is often framed as giving up something. You're not enjoying a vegetarian burger, you're being denied the delights of red meat. But what if the ethical, moral, right choice was also the tastiest one? What if the smartphone made by the yurt-dwelling moralists was also good-looking, inexpensive and useful? That's the question the Fairphone 3 poses."
  4. Uh-oh: Silicon Valley is building a Chinese-style social credit system (Fast Company) — "The most disturbing attribute of a social credit system is not that it’s invasive, but that it’s extralegal. Crimes are punished outside the legal system, which means no presumption of innocence, no legal representation, no judge, no jury, and often no appeal. In other words, it’s an alternative legal system where the accused have fewer rights."
  5. The Adults In The Room (Deadspin) — "The tragedy of digital media isn’t that it’s run by ruthless, profiteering guys in ill-fitting suits; it’s that the people posing as the experts know less about how to make money than their employees, to whom they won’t listen."
  6. A brief introduction to learning agility (Opensource.com) — "One crucial element of adaptability is learning agility. It is the capacity for adapting to situations and applying knowledge from prior experience—even when you don't know what to do. In short, it's a willingness to learn from all your experiences and then apply that knowledge to tackle new challenges in new situations."
  7. Telegram Pushes Ahead With Plans for ‘Gram’ Cryptocurrency (The New York Times) — "In its sales pitch for the Gram, which was viewed by The New York Times, Telegram has said the new digital money will operate with a decentralized structure similar to Bitcoin, which could make it easier to skirt government regulations."
  8. Don't Teach Tools (Assorted Stuff) — "As Culatta notes, concentrating on specific products also locks teachers (and, by extension, their students) into a particular brand, to the advantage of the company, rather than helping them understand the broader concepts of using computing devices as learning and creative tools."
  9. Stoic Reflections From The Gym (part 2) by Greg Sadler (Modern Stoicism) — "From a Stoic perspective, what we do or don’t make time for, particularly in relation to other things, reflects what Epictetus would call the price we actually place upon those things, on what we take to be goods or values, evils or disvalues, and the relative rankings of those in relation to each other."

Calvin & Hobbes cartoon found via a recent post on tenpencemore

The best way out is always through

So said Robert Frost, but I want to begin with the ending of a magnificent post from Kate Bowles. She expresses clearly how I feel sometimes when I sit down to write something for Thought Shrapnel:

[T]his morning I blocked out time, cleared space, and sat down to write — and nothing happened. Nothing. Not a word, not even a wisp of an idea. After enough time staring at the blankness of the screen I couldn’t clearly remember having had an idea, ever.

Along the way I looked at the sky, I ate a mandarin and then a second mandarin, I made a cup of tea, I watched a family of wrens outside my window, I panicked. I let email divert me, and then remembered that was the opposite of the plan. I stayed off Twitter. Panic increased.

Then I did the one thing that absolutely makes a difference to me. I asked for help. I said “I write so many stupid words in my bullshit writing job that I can no longer write and that is the end of that.” And the person I reached out to said very calmly “Why not write about the thing you’re thinking about?”

Sometimes what you have to do as a writer is sit in place long enough, and sometimes you have to ask for help. Whatever works for you, is what works.

Kate Bowles

There are so many things wrong with the world right now that sometimes I feel like I could stop all of the things I'm working on and spend my time just pointing them out to people.

But to what end? You don't change the world by just making people aware of things, not usually. For example, as tragic as the sentence, "the Amazon is on fire" is, it isn't in and of itself a call-to-action. These days, people argue about the facts themselves as well as the appropriate response.

The world is an inordinately complicated place that we seek to make sense of by not thinking as much as humanly possible. To aid and abet us in this task, we divide ourselves, either consciously or unconsciously, into groups who apply similar heuristics. The new (information) is then assimilated into the old (worldview).

I have no privileged position, no objective viewpoint from which to observe and judge the world's actions. None of us do. I'm as complicit in joining and forming in-groups and out-groups as the next person. I decide I'm going to delete my Twitter account and then end up rage-tweeting All The Things.

Thankfully, there are smart people, and not only academics, thinking about all this to figure out what we can and should do. Tim Urban, from the phenomenally-successful Wait But Why, for example, has spent the last three years working on "a new language we can use to think and talk about our societies and the people inside of them". In the first chapter of a new series, he writes about the ongoing struggle between (what he calls) the 'Primitive Minds' and 'Higher Minds' of humans:

The never-ending struggle between these two minds is the human condition. It’s the backdrop of everything that has ever happened in the human world, and everything that happens today. It’s the story of our times because it’s the story of all human times.

Tim Urban

I think this is worth remembering when we spend time on social networks. And especially when we spend so much time that it becomes our default delivery method for the news of the day. Our Primitive Minds respond strongly to stimuli around fear and fornication.

When we reflect on our social media usage and the changing information landscape, the temptation is either to cut down, or to try a different information diet. Some people become the equivalent of Information Vegans, attempting to source the 'cleanest' morsels of information from the most wholesome, trusted, and traceable of places.

But where are those 'trusted places' these days? Are we as happy with the previously gold-standard news outlets such as the BBC and The New York Times as we once were? And if not, what's changed?

The difference, I think, is the way we've decided to allow money to flow through our digital lives. Commercial news outlets, including those with which the BBC competes, are funded by advertising. Those adverts we see in digital spaces aren't just showing things that we might happen to be interested in. They'll keep on showing you that pair of shoes you almost bought last week in every space that is funded by advertising. Which is basically everywhere.

I feel like I'm saying obvious things here that everyone knows, but perhaps it bears repeating. If everyone is consuming news via social networks, and those news stories are funded by advertising, then the nature of what counts as 'news' starts to evolve. What gets the most engagement? How are headlines formed now, compared with a decade ago?

It's as if something hot-wires our brain when something non-threatening and potentially interesting is made available to us 'for free'. We never get to the stuff that we'd like to think defines us, because we're caught in never-ending cycles of titillation. We pay with our attention, that scarce and valuable resource.

Our attention, and more specifically, how we react to our social media feeds when we're 'engaged', is valuable because it can be packaged up and sold to advertisers. But it's also sold to governments. Twitter just had to update its terms and conditions specifically because of the outcry over the Chinese government's propaganda around the Hong Kong protests.

Protesters involved in Hong Kong's 'umbrella revolution' have recently been focusing on cutting down what we used to call CCTV cameras, but which are much more accurately described as 'facial recognition masts':

[youtube.com/watch](https://youtube.com/watch?v=qW18_rOUa2s)

We are living in a world where the answer to everything seems to be 'increased surveillance'. Kids not learning fast enough in school? Track them more. Scared of terrorism? Add more surveillance into the lives of everyday citizens. And on and on.

In an essay earlier this year, Maciej Cegłowski riffed on all of this, reflecting on what he calls 'ambient privacy':

Because our laws frame privacy as an individual right, we don’t have a mechanism for deciding whether we want to live in a surveillance society. Congress has remained silent on the matter, with both parties content to watch Silicon Valley make up its own rules. The large tech companies point to our willing use of their services as proof that people don’t really care about their privacy. But this is like arguing that inmates are happy to be in jail because they use the prison library. Confronted with the reality of a monitored world, people make the rational decision to make the best of it.

That is not consent.

Ambient privacy is particularly hard to protect where it extends into social and public spaces outside the reach of privacy law. If I’m subjected to facial recognition at the airport, or tagged on social media at a little league game, or my public library installs an always-on Alexa microphone, no one is violating my legal rights. But a portion of my life has been brought under the magnifying glass of software. Even if the data harvested from me is anonymized in strict conformity with the most fashionable data protection laws, I’ve lost something by the fact of being monitored.

Maciej Cegłowski

One of the difficulties in resisting the 'Silicon Valley narrative' and Big Tech's complicity with governments is the danger of coming across as a neo-luddite. Without looking very closely to understand what's going on (and having some time to reflect) it can all look like the inevitable march of progress.

So, without necessarily an answer to all this, I guess the best thing is, like Kate, to ask for help. What can we do here? What practical steps can we take? Comments are open.

Friday flinchings

Here's a distillation of the best of what I've been reading over the last three weeks:

  • The new left economics: how a network of thinkers is transforming capitalism (The Guardian) — "The new leftwing economics wants to see the redistribution of economic power, so that it is held by everyone – just as political power is held by everyone in a healthy democracy. This redistribution of power could involve employees taking ownership of part of every company; or local politicians reshaping their city’s economy to favour local, ethical businesses over large corporations; or national politicians making co-operatives a capitalist norm."
  • Dark web detectives and cannabis sommeliers: Here are some jobs that could exist in the future (CBC) — "In a report called Signs of the Times: Expert insights about employment in 2030, the Brookfield Institute for Innovation + Entrepreneurship — a policy institute set up to help Canadians navigate the innovation economy — brings together insights into the future of work gleaned from workshops held across the country."
  • Art Spiegelman: golden age superheroes were shaped by the rise of fascism (The Guardian) — "The young Jewish creators of the first superheroes conjured up mythic – almost god-like – secular saviours to deal with the threatening economic dislocations that surrounded them in the great depression and gave shape to their premonitions of impending global war. Comics allowed readers to escape into fantasy by projecting themselves on to invulnerable heroes."
  • We Have Ruined Childhood (The New York Times) — "I’ve come to believe that the problems with children’s mental and emotional health are caused not by any single change in kids’ environment but by a fundamental shift in the way we view children and child-rearing, and the way this shift has transformed our schools, our neighborhoods and our relationships to one another and our communities."
  • Turning the Nintendo Switch into Android’s best gaming hardware (Ars Technica) — "The Nintendo Switch is, basically, a game console made out of smartphone parts.... Really, the only things that make the Switch a game console are the sweet slide-on controllers and the fact that it is blessed by Nintendo, with actually good AAA games, ecosystem support, and developer outreach."
  • Actually, Gender-Neutral Pronouns Can Change a Culture (WIRED) — "Would native-speaker Swedes, seven years after getting a new pronoun plugged into their language, be more likely to assume this androgynous cartoon was a man? A woman? Either, or neither? Now that they had a word for it, a nonbinary option, would they think to use it?"
  • Don’t Blink! The Hazards of Confidence (The New York Times Magazine) — "Unfortunately, this advice is difficult to follow: overconfident professionals sincerely believe they have expertise, act as experts and look like experts. You will have to struggle to remind yourself that they may be in the grip of an illusion."
  • Why These Social Networks Failed So Badly (Gizmodo) — "It’s not to say that without Facebook, the whole internet would be more like a local farmer’s market or a punk venue or an art gallery or comedy club or a Narnia fanfic club, just that those places are harder to find these days."
  • Every productivity thought I've ever had, as concisely as possible (Alexey Guzey) — "I combed through several years of my private notes and through everything I published on productivity before and tried to summarize all of it in this post."

Header image via Jessica Hagy at Indexed

It is the child within us that trembles before death

So said Plato in his Phaedo. I've just returned from a holiday, much of which was dominated by finding out that a good friend of mine had passed away. It was a huge shock.

A few days later, author Austin Kleon sent out a newsletter noting that a few people he particularly admired had also died, and linked to a post about checking in with death. In it, he quotes advice from a pediatrician who works with patients in palliative care:

Be kind. Read more books. Spend time with your family. Crack jokes. Go to the beach. Hug your dog. Tell that special person you love them.

These are the things these kids wished they could’ve done more. The rest is details.

Oh… and eat ice-cream.

Alastair McAlpine

Despite my grandmother dying last year, I was utterly unprepared for the death of my friend. I had thought that, by reading Stoic philosophy every day and having a memento mori next to my bed, I was somehow in tune with death. I really wasn't.

I shed many tears for the first couple of days after hearing the news. While I was devastated by the loss of a good friend, I was also affected by the questions it raised about my own mortality.

I'm thankful for the strong support network of family and friends that have helped me with the grieving process. One friend in particular has a much healthier relationship with death than me. They said that they've come to see such times in their life as a useful opportunity to re-assess whether they're on the right course.

That makes sense. I don't want to waste the rest of the time I have left.

Some have no aims at all for their life’s course, but death takes them unawares as they yawn languidly – so much so that I cannot doubt the truth of that oracular remark of the greatest of poets: ‘It is a small part of life we really live.’ Indeed, all the rest is not life but merely time.

Seneca

Some people seem to pack several lifetimes into their short time on earth. Others, not so much.

When I studied Philosophy as an undergraduate, I was always puzzled by Aristotle's mention of Solon in the Nicomachean Ethics. He thought events and actions after a person's death could affect their 'happiness'.

On reflection, I think it's a way of saying that the effect that someone has during their time on earth ⁠— for example, as a teacher — outlasts them. Their lives can be viewed in a 'happy' or 'unhappy' light based on how things turn out.

When someone close to us dies before reaching old age, we mentally factor in the happiness they could have experienced had they lived. However, after the initial shock of them no longer being present comes the realisation that they (and we) wouldn't have been around forever anyway.

Back in 2017, Zan Boag, editor of New Philosopher magazine, interviewed Hilde Lindemann, Professor of Philosophy at Michigan State University. In a wide-ranging interview, she commented:

Premature death is a tragedy, but I don’t think death at the end of a normal human life span should be met with anger and indignation. We humans can only take in so much, and in due season it will be time for us all to leave.

Hilde Lindemann

As a husband and father, perhaps the hardest teaching from the Stoic philosophers around death comes from Epictetus in his Enchiridion. He expresses a similar thought in several different ways, but here is one formulation:

If you wish your children, and your wife, and your friends to live for ever, you are stupid; for you wish to be in control of things which you cannot, you wish for things that belong to others to be your own... Exercise, therefore, what is in your control.

Epictetus

There are some things that are in my control, and some things that are not. Epictetus' teachings can be reduced to the simple point that we should be concerned with those things which are under our control.

Marcus Aurelius, whose Meditations, we should remember, were designed as a form of practical philosophical journal, also mentioned death a lot.

Do not act as if you had ten thousand years to throw away. Death stands at your elbow. Be good for something while you live and it is in your power.

Marcus Aurelius

I think the best thing to take from the experience of losing someone close to us is to begin a life worth living right now. Not putting off right action and virtuous living for the future, but practising them immediately.

It's certainly been a wake-up call for me. I'll be reading even more books, giving my family more hugs, and standing up for the things in which I believe. Starting now.

Friday fizzles

I head off on holiday tomorrow! Before I go, check out these highlights from this week's reading and research:

  • “Things that were considered worthless are redeemed” (Ira David Socol) — "Empathy plus Making must be what education right now is about. We are at both a point of learning crisis and a point of moral crisis. We see today what happens — in the US, in the UK, in Brasil — when empathy is lost — and it is a frightening sight. We see today what happens — in graduates from our schools who do not know how to navigate their world — when the learning in our schools is irrelevant in content and/or delivery."
  • Voice assistants are going to make our work lives better—and noisier (Quartz) — "Active noise cancellation and AI-powered sound settings could help to tackle these issues head on (or ear on). As the AI in noise cancellation headphones becomes better and better, we’ll potentially be able to enhance additional layers of desirable audio, while blocking out sounds that distract. Audio will adapt contextually, and we’ll be empowered to fully manage and control our soundscapes."
  • We Aren’t Here to Learn What We Already Know (LA Review of Books) — "A good question, in short, is an honest question, one that, like good theory, dances on the edge of what is knowable, what it is possible to speculate on, what is available to our immediate grasp of what we are reading, or what it is possible to say. A good question, that is, like good theory, might be quite unlovely to read, particularly in its earliest iterations. And sometimes it fails or has to be abandoned."
  • The runner who makes elaborate artwork with his feet and a map (The Guardian) — "The tracking process is high-tech, but the whole thing starts with just a pen and paper. “When I was a kid everyone thought I’d be an artist when I grew up – I was always drawing things,” he said. He was a particular fan of the Etch-a-Sketch, which has something in common with his current work: both require creating images in an unbroken line."
  • What I Do When it Feels Like My Work Isn’t Good Enough (James Clear) — "Release the desire to define yourself as good or bad. Release the attachment to any individual outcome. If you haven't reached a particular point yet, there is no need to judge yourself because of it. You can't make time go faster and you can't change the number of repetitions you have put in before today. The only thing you can control is the next repetition."
  • Online porn and our kids: It’s time for an uncomfortable conversation (The Irish Times) — "Now when we talk about sex, we need to talk about porn, respect, consent, sexuality, body image and boundaries. We don’t need to terrify them into believing watching porn will ruin their lives, destroy their relationships and warp their libidos, maybe, but we do need to talk about it."
  • Drones will fly for days with new photovoltaic engine (Tech Xplore) — "[T]his finding builds on work... published in 2011, which found that the key to boosting solar cell efficiency was not by absorbing more photons (light) but emitting them. By adding a highly reflective mirror on the back of a photovoltaic cell, they broke efficiency records at the time and have continued to do so with subsequent research."
  • Twitter won’t ruin the world. But constraining democracy would (The Guardian) — "The problems of Twitter mobs and fake news are real. As are the issues raised by populism and anti-migrant hostility. But neither in technology nor in society will we solve any problem by beginning with the thought: “Oh no, we put power into the hands of people.” Retweeting won’t ruin the world. Constraining democracy may well do."
  • The Encryption Debate Is Over - Dead At The Hands Of Facebook (Forbes) — "Facebook’s model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once."
  • Living in surplus (Seth Godin) — "When you live in surplus, you can choose to produce because of generosity and wonder, not because you’re drowning."

Image from Dilbert. Shared to make the (hopefully self-evident) counterpoint that not everything of value has an economic value. There's more to life than accumulation.

The best place to be is somewhere else?

So said Albarran Cabrera, except I added a cheeky question mark.

I have a theory. Not a grand, unifying theory of everything, but a theory nonetheless. I reckon that, while common wisdom attributes the decline of comments on blogs to social media, there's at least one other cause.

Here's an obvious point: there are more people online now than there were ten years ago. As a result, there's more stuff being produced and shared and, because of that, more to miss out on. The anxiety this creates is known as the Fear Of Missing Out (or FOMO).

While I don't think anyone realistically thinks it's possible to keep up with everything produced online every day, I think people do have an expectation that they can keep up with what their online friends are doing and thinking. As the number of people we're following in different places grows and grows, we don't have much time to share meaningfully. Hence the rise of the retweet button.

Back in 2006, in the mists of internet time, Kathy Sierra wrote a great post entitled The myth of "keeping up". Remember that this was before people were really using social networks such as Twitter. She talks about what we're experiencing as 'information anxiety' and has some tips to combat it, which I think are still relevant:

  • Find the best aggregators
  • Get summaries
  • Cut the redundancy!
  • Unsubscribe to as many things as possible
  • Recognise that gossip and celebrity entertainment are black holes
  • Pick the categories you want for a balanced perspective, and include some from OUTSIDE your main field of interest
  • Be a LOT more realistic about what you're likely to get to, and throw the rest out.
  • In any thing you need to learn, find a person who can tell you what is:
    • Need to know
    • Should know
    • Nice to know
    • Edge case, only if it applies to you specifically
    • Useless

The interesting thing is that, done well, social media can actually be a massive force for good. It used to be set up for that, coming on the back of RSS. Now, it's set up to drag you into arguments about politics and the kind of "black holes" of gossip and celebrity entertainment that Kathy mentions.

One of the problems is that we have a cult of 'busy' which people mis-attribute to a Protestant work ethic instead of rapacious late-stage capitalism. I've recently finished 24/7: Late Capitalism and the Ends of Sleep by Jonathan Crary where he makes this startlingly obvious, but nevertheless profound point:

Because one’s bank account and one’s friendships can now be managed through identical machinic operations and gestures, there is a growing homogenization of what used to be entirely unrelated areas of experience.

Jonathan Crary

...and:

[S]ince no moment, place, or situation now exists in which one can not shop, consume, or exploit networked resources, there is a relentless incursion of the non-time of 24/7 into every aspect of social or personal life.

Jonathan Crary

In other words, you're busy because of your smartphone, the apps you decide to install upon it, and the notifications that you then receive.

The solution to FOMO is to know who you are, what you care about, and the difference you're trying to make in the world. As Gandhi famously said:

Happiness is when what you think, what you say, and what you do are in harmony.

Mahatma Gandhi

I've recently fallen into the trap of replying to work emails on my days off. It's a slippery slope, as it sets up an expectation.

via xkcd

The same goes with social media, of course, except that it's even more insidious, as an 'action' can just be liking or retweeting. It leads to slacktivism instead of making actual, meaningful change in the world.

People joke about life admin, but one of those tasks might be to write down (yes! with a pen and paper!) the things you're trying to achieve with the 'free' apps that you've got installed. If you were being thorough, or teaching kids how to do this, perhaps you'd:

  1. List all of the perceived benefits
  2. List all of the perceived drawbacks
  3. List all of the ways that the people making the free app can make money

Tim Ferriss recently reposted an interview he did with Seth Godin back in 2016 about how he (Seth) manages his life. It's an object lesson in leading an intentional life without overly quantifying it. I can't help but think it's all about focus. Oh, and he doesn't use social media, other than auto-posting from his blog to Twitter.

For me, at least, because I spend so much time surrounded by technology, the decisions I make about tech are decisions I make about life. A couple of months ago I wrote a post entitled Change your launcher, change your life where I explained that even just changing how you access apps can make a material difference to your life.

So, to come full circle, the best place to be is actually where you are right now, not somewhere else. If you're fully present in the situation (Tim Ferriss suggests taking three breaths), then ask yourself some hard questions about what success looks like for you, and perhaps whether what you say, what you think, and what you do are in harmony.

Friday fidgetings

These things popped into my consciousness this week:

  • Soon, satellites will be able to watch you everywhere all the time (MIT Technology Review) — "Some of the most radical developments in Earth observation involve not traditional photography but rather radar sensing and hyperspectral images, which capture electromagnetic wavelengths outside the visible spectrum. Clouds can hide the ground in visible light, but satellites can penetrate them using synthetic aperture radar, which emits a signal that bounces off the sensed object and back to the satellite. It can determine the height of an object down to a millimeter."
  • The lesson from the ruins of Notre Dame: don’t rely on billionaires (The Guardian) — "They have banked the publicity, while dreaming up small print that didn’t exist in the spring. As another charity executive, Célia Vérot, said: “It’s a voluntary donation, so the companies are waiting for the government’s vision to see what precisely they want to fund.” It’s as if the vast project of rebuilding a 12th-century masterpiece was a breakfast buffet from which one could pick and choose."
  • Does It Stick? (Hapgood) — "But you see something that I often have trouble explaining to others — with the right habits you find students start sounding like entirely different people. They start being, in some ways, very different people. Less reactive, more reflective, more curious. If the habits stick, rather than decay, that effect can cumulative, because the students have done that most powerful of things — they have learned how to learn. And the impact of that can change a person’s life."
  • The Last Days of John Allen Chau (Outside) — "In the fall of 2018, the 26-year-old American missionary traveled to a remote speck of sand and jungle in the Indian Ocean, attempting to convert one of the planet's last uncontacted tribes to Christianity. The islanders killed him, and Chau was pilloried around the world as a deluded Christian supremacist who deserved to die. Alex Perry pieces together the life and death of a young adventurer driven to extremes by unshakable faith."
  • Human magnetism (Aeon) — "Even Charles Darwin added his two cents on these topics, claiming that ‘some part of the brain is specialised for the function of direction’. If such a mechanism did exist in our ancestors, could it have been muted – phased out with the advancement of consciousness and communication, the onset of civilisation, the invention of artificial means such as the compass and, ultimately, technologies such as GPS?"
  • How can we help the hikikomori to leave their rooms? (Aeon) — "If these anxieties are keeping people inside their homes, what’s prompting them to retreat there in the first place? One answer could be school phobia. The survey revealed that hikikomori are more likely to have dropped out of education. The transition from high school to college appeared especially harsh."
  • 3-day weekends could make people happier and more productive (Business Insider) — "There might not be an immediate change in productivity with the introduction of a four-day workweek, but with less time to kill at work, employees may procrastinate less (though there would always be those who try to take advantage)."
  • Does the Mystery of Stonehenge Involve Pig Fat? (Atlas Obscura) — "New research says the megaliths may have been dragged to the site with the help of lard."
  • In praise of the things that cost nothing (The Guardian) — "There is plenty to enjoy that is free in a world where it seems everything has a cost."

Image via Poorly Drawn Lines

Neoliberalism in any guise is not the solution but the problem

Today's quotation-as-title is from Nancy Fraser, whose short book The Old Is Dying and the New Cannot Be Born in turn gets its title from a quotation from Antonio Gramsci.

It's an excellent book; quick to read, straight to the point, and it helped me to understand some of what is going on at the moment in both US and world politics.

First, let's explain terms, as it is a book that presupposes some knowledge of political philosophy. 'Neoliberalism' isn't an easy term to define, as its meaning has mutated over time, and it's usually used in a derogatory way.

There's a whole history of the term at Wikipedia, but I'll use definitions from Investopedia and The Guardian:

Neoliberalism is a policy model—bridging politics, social studies, and economics—that seeks to transfer control of economic factors to the private sector from the public sector. It tends towards free-market capitalism and away from government spending, regulation, and public ownership.

Investopedia

In short, “neoliberalism” is not simply a name for pro-market policies, or for the compromises with finance capitalism made by failing social democratic parties. It is a name for a premise that, quietly, has come to regulate all we practise and believe: that competition is the only legitimate organising principle for human activity.

Guardian

To me, it's the reason why humans go out of their way to engineer situations where people and organisations are pitted against each other to compete for 'awards', no matter how made-up or paid-for they may be. It's a way of framing society, human interactions, and reducing everything to $$$.

In that vein, the most recent issue of New Philosopher features an essay by Warwick Smith where he uses the thought experiment of an AI 'paperclip maximiser'. This runs amok and turns the entire universe into paperclips:

I recently heard Daniel Schmachtenberger taking this thought experiment in a very interesting direction by saying that human society is already the paperclip maximiser but instead of making paperclips we're making dollars — which are primarily just zeroes and ones in bank databases. Our collective intelligence system has an overriding purpose: to turn everything into money — trees, labour, water... everything. It is also very good at learning how to learn and is extremely good at eliminating any threats.

Warwick Smith

This attempt to turn everything into money is basically the neoliberal project. What Nancy Fraser does is identify two different strains of neoliberalism, which she explains through the lenses of 'distribution' and 'recognition':

  • Reactionary neoliberalism — moving public goods into private hands, within an exclusionary vision of a racist, patriarchal, and homophobic society.
  • Progressive neoliberalism — moving public goods into private hands, while using the banner of 'diversity' to assimilate equality and meritocracy.

The difference between these two strands of neoliberalism, then, comes in the way that they recognise people. Note that the method of distribution remains the same:

The political universe that Trump upended was highly restrictive. It was built around the opposition between two versions of neoliberalism, distinguished chiefly on an axis of recognition. Granted, one could choose between multiculturalism and ethnonationalism. But one was stuck, either way, with financialization and deindustrialization. With the menu limited to progressive and reactionary neoliberalism, there was no force to oppose the decimation of working-class and middle-class standards of living. Antineoliberal projects were severely marginalized, if not simply excluded from the public sphere.

Nancy Fraser

It's as if the Overton Window of acceptable public political discourse served up a menu of only different flavours of neoliberalism:

Ideologies are oriented within a narrative that spans the past, present, and future. We can argue over visions of what education should look like within a society, for example, because we're interested in how the next generation will turn out.

In Present Shock, Douglas Rushkoff explains that instead of shackling themselves to ideologies, Trump and other populist politicians take advantage of the 24/7 'always on' media landscape to provide a constant knee-jerk presentism:

A presentist mediascape may prevent the construction of false and misleading narratives by elites who mean us no good, but it also tends to leave everyone looking for direction and responding or overresponding to every bump in the road.

Douglas Rushkoff

What we're witnessing is essentially the end of politics as we know it, says Rushkoff:

As a result, what used to be called statecraft devolves into a constant struggle with crisis management. Leaders cannot get on top of issues, much less ahead of them, as they instead seek merely to respond to the emerging chaos in a way that makes them look authoritative.

[...]

If we have no destination toward which we are progressing, then the only thing that motivates our movement is to get away from something threatening. We move from problem to problem, avoiding calamity as best we can, our worldview increasingly characterized by a sense of panic.

[...]

Blatant shock is the only surefire strategy for gaining viewers in the now.

Douglas Rushkoff

We might be witnessing the end of progressive neoliberalism, but it's not as if that's being replaced by anything different, anything better.

What, then, can we expect in the near term? Absent a secure hegemony, we face an unstable interregnum and the continuation of the political crisis. In this situation, the words of Gramsci ring true: "The old is dying and the new cannot be born; in this interregnum a great variety of morbid symptoms appear."

Nancy Fraser

No matter what the question is, neoliberalism is never the answer. The trouble, I think, is that two-dimensional diagrams of political options are far too simplistic:

Political compass, via Wikimedia Commons

For example, as Edurne Scott Loinaz shows, even within the Libertarian Left (the 'lower left') there are many different positions:

Lower left cultural differences within the zone of solidarity (Edurne Scott Loinaz)

The Libertarian Left has perhaps the best to offer in terms of fighting neoliberalism and populists like Trump. The problem is unity, and use of language:

When binary language is used within the lower left it does untold violence to our communities and makes solidarity impossible: if one can switch between binary language to speak truth about capitalists and authoritarians, and switch to dimensional language within the zone of solidarity with fellow lower leftists, it will be easier to nurture solidarity within the lower left.

Edurne Scott Loinaz

For the first time in my life, I'm actually somewhat fearful of what comes next, politically speaking. Are we going to end up with populists entrenching the authoritarian right, going back full circle to reactionary neoliberalism? Or does this current crisis mean that something new can emerge?


Header image by Guillaume Paumier used under a Creative Commons license

Friday federations

These things piqued my interest this week:

  • You Should Own Your Favorite Books in Hard Copy (Lifehacker) — "Most importantly, when you keep physical books around, the people who live with you can browse and try them out too."
  • How Creative Commons drives collaboration (Vox) "Although traditional copyright protects creators from others redistributing or repurposing their works entirely, it also restricts access, for both viewers and makers."
  • Key Facilitation Skills: Distinguishing Weird from Seductive (Grassroots Economic Organizing) — "As a facilitation trainer the past 15 years, I've collected plenty of data about which lessons have been the most challenging for students to digest."
  • Why Being Bored Is Good (The Walrus) — "Boredom, especially the species of it that I am going to label “neoliberal,” depends for its force on the workings of an attention economy in which we are mostly willing participants."
  • 5: People having fun on the internet (Near Future Field Notes) — "The internet is still a really great place to explore. But you have to get back into Internet Nature instead of spending all your time in Internet Times Square wondering how everything got so loud and dehumanising."
  • The work of a sleepwalking artist offers a glimpse into the fertile slumbering brain (Aeon) "Lee Hadwin has been scribbling in his sleep since early childhood. By the time he was a teen, he was creating elaborate, accomplished drawings and paintings that he had no memory of making – a process that continues today. Even stranger perhaps is that, when he is awake, he has very little interest in or skill for art."
  • The Power of One Push-Up (The Atlantic) — "Essentially, these quick metrics serve as surrogates that correlate with all kinds of factors that determine a person’s overall health—which can otherwise be totally impractical, invasive, and expensive to measure directly. If we had to choose a single, simple, universal number to define health, any of these functional metrics might be a better contender than BMI."
  • How Wechat censors images in private chats (BoingBoing) — "Wechat maintains a massive index of the MD5 hashes of every image that Chinese censors have prohibited. When a user sends another user an image that matches one of these hashes, it's recognized and blocked at the server before it is transmitted to the recipient, with neither the recipient nor the sender being informed that the censorship has taken place."
  • It's Never Too Late to Be Successful and Happy (Invincible Career) — "The “race” we are running is a one-person event. The most important comparison is to yourself. Are you doing better than you were last year? Are you a better person than you were yesterday? Are you learning and growing? Are you slowly figuring out what you really want, what makes you happy, and what fulfillment means for you?"
  • 'Blitzscaling' Is Choking Innovation—and Wasting Money (WIRED) — "If we learned anything from the dotcom bubble at the turn of the century, it’s that in an environment of abundant capital, money does not necessarily bestow competitive advantage. In fact, spending too much, too soon on unproven business models only heightens the risk that a company's race for global domination can become a race to oblivion."
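The WeChat item above describes a simple mechanism: hash each image and drop it server-side if the digest appears in a blocklist. A minimal, purely illustrative sketch of that matching logic (the blocklist contents are invented; the single entry is the well-known MD5 digest of the pangram "The quick brown fox jumps over the lazy dog"):

```python
import hashlib

# Hypothetical blocklist of prohibited-image digests. Real systems index
# millions of hashes; this one holds a single well-known MD5 test vector.
BLOCKED_HASHES = {
    "9e107d9d372bb6826bd81d3542a419d6",
}

def should_deliver(image_bytes: bytes) -> bool:
    """Return False if the image's MD5 digest is on the blocklist."""
    digest = hashlib.md5(image_bytes).hexdigest()
    return digest not in BLOCKED_HASHES
```

The point of hash matching is that the server never needs to 'look at' the image at all, which is also why trivially re-encoding an image defeats this kind of exact-match filter.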

Image: Federation Square by Julien used under a Creative Commons license

The greatest obstacle to discovery is not ignorance—it is the illusion of knowledge

So said Daniel J. Boorstin. It's been an interesting week for those, like me, who follow the development of interaction between humans and machines. Specifically, people seem shocked that voice assistants are being used for health questions, and that the companies who make them employ people to listen to samples of voice recordings in order to improve them.

Before diving into that, let's just zoom out a bit and remind ourselves that the average level of digital literacies in the general population is pretty poor. Sometimes I wonder how on earth VC-backed companies manage to burn through so much cash. Then I remember the contortions that those who design visual interfaces go through so that people don't have to think.

Discussing 'fake news' and our information literacy problem in Forbes, you can almost feel Kalev Leetaru's eye-roll when he says:

It is the accepted truth of Silicon Valley that every problem has a technological solution.

Most importantly, in the eyes of the Valley, every problem can be solved exclusively through technology without requiring society to do anything on its own. A few algorithmic tweaks, a few extra lines of code and all the world’s problems can be simply coded out of existence.

Kalev Leetaru

It's somewhat tangential to the point I want to make in this article, but Cory Doctorow makes a good point in this regard about fake news for Locus:

Fake news is an instrument for measuring trauma, and the epistemological incoherence that trauma creates – the justifiable mistrust of the establishment that has nearly murdered our planet and that insists that making the richest among us much, much richer will benefit everyone, eventually.

Cory Doctorow

Before continuing, I'd just like to say that I've got some skin in the voice assistant game, given that our home has no fewer than six devices that use the Google Assistant (ten if you count smartphones and tablets).

Voice assistants are pretty amazing when you know exactly what you want and can form a coherent query. It's essentially just clicking the top link on a Google search result, without any of the effort of pointing and clicking. "Hey Google, do I need an umbrella today?"

However, some people are suspicious of voice assistants to a degree that borders on the superstitious. There are perhaps some valid reasons if you know your tech, but if you're of the opinion that your voice assistant is 'always recording' and literally sending everything to Amazon, Google, Apple, and/or Donald Trump then we need to have words. Just think about that for a moment, realise how ridiculous it is, and move on.

This week an article by VRT NWS stoked fears like these. It was cleverly written so that those who read it quickly could easily draw the conclusion that Google is listening to everything you say. However, let me carve out the key paragraphs:

Why is Google storing these recordings and why does it have employees listening to them? They are not interested in what you are saying, but the way you are saying it. Google’s computer system consists of smart, self-learning algorithms. And in order to understand the subtle differences and characteristics of the Dutch language, it still needs to learn a lot.

[...]

Speech recognition automatically generates a script of the recordings. Employees then have to double check to describe the excerpt as accurately as possible: is it a woman’s voice, a man’s voice or a child? What do they say? They write out every cough and every audible comma. These descriptions are constantly improving Google’s search engines, which results in better reactions to commands. One of our sources explains how this works.

VRT NWS

Every other provider of speech recognition products does this. Obviously. How else would you manage to improve voice recognition in real-world situations? What VRT NWS did was to get a sub-contractor to break a Non-Disclosure Agreement (and violate GDPR) to share recordings.

Google responded on their blog The Keyword, saying:

As part of our work to develop speech technology for more languages, we partner with language experts around the world who understand the nuances and accents of a specific language. These language experts review and transcribe a small set of queries to help us better understand those languages. This is a critical part of the process of building speech technology, and is necessary to creating products like the Google Assistant.

We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again.

We apply a wide range of safeguards to protect user privacy throughout the entire review process. Language experts only review around 0.2 percent of all audio snippets. Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google.

The Keyword

As I've said before, due to the GDPR actually having teeth (British Airways was fined £183m last week) I'm a lot happier to share my data with large companies than I was before the legislation came in. That's the whole point.

The other big voice assistant story, in the UK at least, was that the National Health Service (NHS) is partnering with Amazon Alexa to offer health advice. The BBC reports:

From this week, the voice-assisted technology is automatically searching the official NHS website when UK users ask for health-related advice.

The government in England said it could reduce demand on the NHS.

Privacy campaigners have raised data protection concerns but Amazon say all information will be kept confidential.

The partnership was first announced last year and now talks are under way with other companies, including Microsoft, to set up similar arrangements.

Previously the device provided health information based on a variety of popular responses.

The use of voice search is on the increase and is seen as particularly beneficial to vulnerable patients, such as elderly people and those with visual impairment, who may struggle to access the internet through more traditional means.

The BBC

So long as this is available to all types of voice assistants, this is great news. The number of people I know, including family members, who have convinced themselves they've got serious problems by spending ages searching their symptoms, is quite frightening. Getting sensible, prosaic advice is much better.

Iliana Magra writes in the The New York Times that privacy campaigners are concerned about Amazon setting up a health care division, but that there are tangible benefits to certain sections of the population.

The British health secretary, Matt Hancock, said Alexa could help reduce strain on doctors and pharmacists. “We want to empower every patient to take better control of their health care,” he said in a statement, “and technology like this is a great example of how people can access reliable, world-leading N.H.S. advice from the comfort of their home.”

His department added that voice-assistant advice would be particularly useful for “the elderly, blind and those who cannot access the internet through traditional means.”

Iliana Magra

I'm not dismissing the privacy issues, of course not. But what I've found, especially recently, is that the knowledge, skills, and expertise required to be truly 'Google-free' (or the equivalent) are an order of magnitude greater than what is realistically possible for the general population.

It might be fatalistic to ask the following question, but I'll do it anyway: who exactly do we expect to be building these things? Mozilla, one of the world's largest tech non-profits, is conspicuously absent in these conversations, and somehow I don't think people are going to trust governments to get involved.

For years, techies have talked about 'personal data vaults' where you could share information in a granular way without being tracked. The BBC Box, currently being trialled, could potentially help with some of this:

With a secure Databox at its heart, BBC Box offers something very unusual and potentially important: it is a physical device in the person’s home onto which personal data is gathered from a range of sources, although of course (and as mentioned above) it is only collected with the participant’s explicit permission, and processed under the person’s control.

Personal data is stored locally on the box’s hardware and once there, it can be processed and added to by other programmes running on the box - much like apps on a smartphone. The results of this processing might, for example be a profile of the sort of TV programmes someone might like or the sort of theatre they would enjoy. This is stored locally on the box - unless the person explicitly chooses to share it. No third party, not even the BBC itself, can access any data in ‘the box’ unless it is authorised by the person using it, offering a secure alternative to existing services which rely on bringing large quantities of personal data together in one place - with limited control by the person using it.

The BBC
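The architecture the BBC describes can be sketched as a tiny vault object: data stays local, and nothing is shared with any third party without explicit, per-party authorisation. All class and method names here are illustrative, not the real Databox API:

```python
# Hedged sketch of the 'personal data vault' idea: local storage plus an
# explicit allow-list of parties the owner has authorised.
class PersonalDataVault:
    def __init__(self) -> None:
        self._data: dict[str, str] = {}     # kept on local hardware only
        self._authorised: set[str] = set()  # parties the owner has approved

    def store(self, key: str, value: str) -> None:
        self._data[key] = value

    def authorise(self, party: str) -> None:
        self._authorised.add(party)         # explicit owner consent

    def share(self, party: str, key: str) -> str:
        if party not in self._authorised:
            raise PermissionError(f"{party} is not authorised")
        return self._data[key]
```

The design choice worth noticing is that sharing is opt-in per party, inverting the usual model where data is centralised first and access controls are bolted on afterwards.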

It's an interesting concept and, if they can get the user experience right, a potentially groundbreaking one. Eventually, of course, it will be in your smartphone, which means that device really will be a 'digital self'.

You can absolutely opt-out of whatever you want. For example, I opt out of Facebook's products (including WhatsApp and Instagram). You can point out to others the reasons for that, but at some point you have to realise it's an opinion, a lifestyle choice, an ideology. Not everyone wants to be a tech vegan, or live their lives under those who act as though they are one.

Friday ferretings

These things jumped out at me this week:

  • Deepfakes will influence the 2020 election—and our economy, and our prison system (Quartz) ⁠— “The problem doesn’t stop at the elections, however. Deepfakes can alter the very fabric of our economic and legal systems. Recently, we saw a deepfake video of Facebook CEO Mark Zuckerberg bragging about abusing data collected from users circulated on the internet. The creators of this video said it was produced to demonstrate the power of manipulation and had no malicious intent—yet it revealed how deceptively realistic deepfakes can be.”
  • The Slackification of the American Home (The Atlantic) — “Despite these tools’ utility in home life, it’s work where most people first become comfortable with them. 'The membrane that divides work and family life is more porous than it’s ever been before,' says Bruce Feiler, a dad and the author of The Secrets of Happy Families. 'So it makes total sense that these systems built for team building, problem solving, productivity, and communication that were invented in the workplace are migrating to the family space'.”
  • You probably don’t know what your coworkers think of you. Here’s how to change that (Fast Company) — “[T]he higher you rise in an organization, the less likely you are to get an accurate picture of how other people view you. Most people want to be viewed favorably by others in a position of power. Once you move up to a supervisory role (or even higher), it is difficult to get people to give you a straight answer about their concerns.”
  • Sharing, Generosity and Gratitude (Cable Green, Creative Commons) — “David is home recovering and growing his liver back to full size. I will be at the Mayo Clinic through the end of July. After the Mayo surgeons skillfully transplanted ⅔ of David’s liver into me, he and I laughed about organ remixes, if he should receive attribution, and wished we’d have asked for a CC tattoo on my new liver.”
  • Flexibility as a key benefit of open (The Ed Techie) — “As I chatted to Dames and Lords and fiddled with my tie, I reflected on that what is needed for many of these future employment scenarios is flexibility. This comes in various forms, and people often talk about personalisation but it is more about institutional and opportunity flexibility that is important.”
  • Abolish Eton: Labour groups aim to strip elite schools of privileges (The Guardian) — “Private schools are anachronistic engines of privilege that simply have no place in the 21st century,” said Lewis. “We cannot claim to have an education system that is socially just when children in private schools continue to have 300% more spent on their education than children in state schools.”
  • I Can't Stop Winning! (Pinboard blog) - “A one-person business is an exercise in long-term anxiety management, so I would say if you are already an anxious person, go ahead and start a business. You're not going to feel any worse. You've already got the main skill set of staying up and worrying, so you might as well make some money.”
  • How To Be The Remote Employee That Proves The Stereotypes Aren’t True (Trello blog) — “I am a big fan of over-communicating in general, and I truly believe that this is a rule all remote employees should swear by.”
  • I Used Google Ads for Social Engineering. It Worked. (The New York Times) — “Ad campaigns that manipulate searchers’ behavior are frighteningly easy for anyone to run.”
  • Road-tripping with the Amazon Nomads (The Verge) — “To stock Amazon’s shelves, merchants travel the backroads of America in search of rare soap and coveted toys.”

Image from Guillermo Acuña fronts his remote Chilean retreat with large wooden staircase (Dezeen)

Do not impose one's own standard on the work of others. Mutual moderation and cooperation will proffer better results.

I think I must have come across the above saying from Hsing Yun via Mayel de Borniol. It captures some of what I want to discuss in this article which centres around decision-making within organisations.

Let's start with a great article from Roman Imankulov from Doist. He looks to the Internet Engineering Task Force (IETF)'s approach, as enshrined in a document from 2014, explaining their 'rough consensus' approach:

Rough consensus isn’t majority rule. It’s okay to go ahead with a solution that may not look like the best choice for everyone or even the majority. "Not the best choice" means that you believe there is a better way to solve the problem, but you accept that this one will work too. That type of feedback should be welcomed, but it shouldn’t be allowed to slow down a decision.

Roman Imankulov

If they try hard enough, everyone can come up with a reason why an idea or approach won't work. My experience is that many middle-aged white men see it as their sworn duty to come up with as many of those reasons as possible 🙄

What the IETF calls 'rough consensus' I think I'd probably call 'alignment'. You don't all have to agree that a proposal is without problems, but those problems should be surmountable. Within CoTech, a network of co-operatives to which We Are Open belongs, we use Loomio. It has a number of decision tools, including the 'proposal':

Example of a 'proposal' from Loomio's documentation

As you can see, there's the ability for anyone to 'Block' a proposal, meaning that it can't be passed in its current form. People can 'Abstain' if there's a conflict of interest, or if they don't feel like they've got enough experience or expertise. Note that it's entirely possible for someone to 'Disagree' and the motion to still go ahead.
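The logic of this kind of proposal can be sketched in a few lines. This is a hypothetical illustration of the consent model described above, not Loomio's actual code: any 'Block' stops the proposal in its current form, while 'Disagree' and 'Abstain' votes are recorded but don't prevent it passing.

```python
# Illustrative model of a consent-based proposal (hypothetical, not
# Loomio's implementation): a single 'block' stops the proposal, but
# disagreements and abstentions do not.

from collections import Counter

VALID_POSITIONS = {"agree", "abstain", "disagree", "block"}

def proposal_outcome(votes):
    """Return ('blocked' or 'passed', tally of positions).

    `votes` maps participant name -> position.
    """
    positions = Counter()
    for participant, position in votes.items():
        if position not in VALID_POSITIONS:
            raise ValueError(f"Unknown position from {participant}: {position}")
        positions[position] += 1
    return ("blocked" if positions["block"] else "passed"), dict(positions)

# A proposal goes ahead despite one disagreement and one abstention:
outcome, tally = proposal_outcome(
    {"ana": "agree", "ben": "disagree", "cal": "abstain", "dee": "agree"}
)
print(outcome, tally)  # passed {'agree': 2, 'disagree': 1, 'abstain': 1}
```

The point of the design is visible in the code: dissent is surfaced and counted, but only a block forces the group back to the drawing board.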

What I like about Loomio as a tool is that it focuses on decision-making. It's not about endless discussion and debate, but about having a bias towards action. You can separate the planning process from the implementation stage:

Rough consensus doesn’t mean that we don’t aim for perfection in the actual implementation of the solution. When implementing, we should always aim for technical excellence. Commitment to the implementation is often what makes a solution the right one. (This is similar to Amazon’s "disagree and commit" philosophy.)

Roman Imankulov

I can't, by my nature, stand hierarchy. Unfortunately, it's the default operating system of most organisations, and despite our best efforts, we haven't got a one-size-fits-all alternative to it. I think this is partly because nobody has to teach you how hierarchy works.

Over the weekend, while we were walking in the Lake District, Tom Broughton and I were discussing sociocracy:

Sociocracy, also known as dynamic governance, is a system of governance which seeks to achieve solutions that create harmonious social environments as well as productive organizations and businesses. It is distinguished by the use of consent rather than majority voting in decision-making, and decision-making after discussion by people who know each other.

Wikipedia

Tom's a Quaker and so used to consent-based decision-making. I explained that we'd asked Outlandish (a CoTech member) to run a sociocratic design sprint to kick off our work around MoodleNet. It was based on the Google design sprint approach, but — as Kayleigh from Outlandish points out — featured an important twist:

We decided to remove the ‘decider’ role that a Google Sprint employs. We weren’t comfortable with the responsibility and authority of decisions sitting with one person, and having spent a few years practising sociocracy already, it just wouldn’t have felt right.

[...]

Martin, Moodle’s CEO and founder, joined us for the duration of the sprint. While Martin naturally had the most expertise in the domain, the most ‘skin in the game’, and had done the most background thinking, sociocracy meant that he still needed to convince the rest of the sprint team as to why his ideas were best, and take on board other suggestions and compromises. We feel that it led to better outputs at each stage of the design sprint.

Kayleigh Walsh

It was the first time I'd seen a CEO give up their hierarchical power in the interests of ensuring that we designed something that could be the best it could possibly be. In fact, that week last May is probably one of the highlights of my career to date.

Diagram of how Sociocracy works
Diagram via Sociocracy for All

That was one week into which was poured a lot of time, attention, and money. But what if you want to practise something like sociocracy on a day-to-day basis? You have to think about the structure of organisations, as there's no such thing as a 'structureless' group:

Any group of people of whatever nature that comes together for any length of time for any purpose will inevitably structure itself in some fashion. The structure may be flexible; it may vary over time; it may evenly or unevenly distribute tasks, power and resources over the members of the group. But it will be formed regardless of the abilities, personalities, or intentions of the people involved. The very fact that we are individuals, with different talents, predispositions, and backgrounds makes this inevitable. Only if we refused to relate or interact on any basis whatsoever could we approximate structurelessness -- and that is not the nature of a human group.

Jo Freeman

It's only within the last year that I've discovered left-libertarianism as a coherent political and social philosophy that helps me reconcile two things that I've previously found difficult. On the one hand, I believe in a small state. On the other, I believe we have a duty to one another and should help out wherever possible.

Left-libertarianism, also known as left-wing libertarianism, names several related yet distinct approaches to political and social theory which stress both individual freedom and social equality. In its classical usage, left-libertarianism is a synonym for anti-authoritarian varieties of left-wing politics such as libertarian socialism which includes anarchism and libertarian Marxism among others.

[...]

While maintaining full respect for personal property, left-libertarians are skeptical of or fully against private ownership of natural resources, arguing in contrast to right-libertarians that neither claiming nor mixing one's labor with natural resources is enough to generate full private property rights and maintain that natural resources (raw land, oil, gold, the electromagnetic spectrum, air-space and so on) should be held in an egalitarian manner, either unowned or owned collectively. Those left-libertarians who support private property do so under occupation and use property norms or under the condition that recompense is offered to the local or even global community.

Wikipedia

In other words, you don't have to be a Marxist, communist, or anarchist to be a left-libertarian. It means you can start from a basis of personal autonomy, but end with an egalitarian approach to the world where resources (especially natural resources) are collectively owned.

To me, this is the position from which we should start when we think about decision-making within organisations. First of all, we should ask: who owns the organisation? Why? Second, we should consider how the organisation should be structured. Ten layers of management might be bad, but so is a completely flat structure for 700 people. And finally, we should think about appropriate mechanisms for decision-making.

The usual criticism of sociocracy and other consent-based decision-making systems is that they are too slow and don't work in practice. From my experience of participating in the Outlandish/Moodle design sprint, witnessing a Mozilla Festival session in which participants quickly got up to speed on sociocracy, and taking part in CoTech gatherings (both online and offline), I'd say sociocracy is a viable solution.

The best decisions aren't ones where you have all of the information to hand. That's impossible. The best decisions are based on trust and consent.

As I get older, I'm realising that the best way we can improve the world is to improve its governance. It's not that we haven't got extremely talented people in the world, it's that we don't always know how to make good decisions. I'd like to change that.

Friday frustrations

I couldn't help but notice these things this week:

  • Don’t ask forgiveness, radiate intent (Elizabeth Ayer) ⁠— "I certainly don’t need a reputation as being underhanded or an organizational problem. Especially as a repeat behavior, signalling builds me a track record of openness and predictability, even as I take risks or push boundaries."
  • When will we have flying cars? Maybe sooner than you think. (MIT Technology Review) — "An automated air traffic management system in constant communication with every flying car could route them to prevent collisions, with human operators on the ground ready to take over by remote control in an emergency. Still, existing laws and public fears mean there’ll probably have to be pilots at least for a while, even if only as a backup to an autonomous system."
  • For Smart Animals, Octopuses Are Very Weird (The Atlantic) — "Unencumbered by a shell, cephalopods became flexible in both body and mind... They could move faster, expand into new habitats, insinuate their arms into crevices in search of prey."
  • Cannabidiol in Anxiety and Sleep: A Large Case Series. (PubMed) — "The final sample consisted of 72 adults presenting with primary concerns of anxiety (n = 47) or poor sleep (n = 25). Anxiety scores decreased within the first month in 57 patients (79.2%) and remained decreased during the study duration. Sleep scores improved within the first month in 48 patients (66.7%) but fluctuated over time. In this chart review, CBD was well tolerated in all but 3 patients."
  • 22 Lessons I'm Still Learning at 82 (Coach George Raveling) — "We must always fill ourselves with more questions than answers. You should never retire your mind. After you retire mentally, then you are just taking up residence in society. I do not ever just want to be a resident of society. I want to be a contributor to our communities."
  • How Boris Johnson's "model bus hobby" non sequitur manipulated the public discourse and his search results (BoingBoing) — "Remember, any time a politician deliberately acts like an idiot in public, there's a good chance that they're doing it deliberately, and even if they're not, public idiocy can be very useful indeed."
  • It’s not that we’ve failed to rein in Facebook and Google. We’ve not even tried. (The Guardian) — "Surveillance capitalism is not the same as digital technology. It is an economic logic that has hijacked the digital for its own purposes. The logic of surveillance capitalism begins with unilaterally claiming private human experience as free raw material for production and sales."
  • Choose Boring Technology (Dan McKinley) — "The nice thing about boringness (so constrained) is that the capabilities of these things are well understood. But more importantly, their failure modes are well understood."
  • What makes a good excuse? A Cambridge philosopher may have the answer (University of Cambridge) — "Intentions are plans for action. To say that your intention was morally adequate is to say that your plan for action was morally sound. So when you make an excuse, you plead that your plan for action was morally fine – it’s just that something went awry in putting it into practice."
  • Your Focus Is Priceless. Stop Giving It Away. (Forge) — "To virtually everyone who isn’t you, your focus is a commodity. It is being amassed, collected, repackaged and sold en masse. This makes your attention extremely valuable in aggregate. Collectively, audiences are worth a whole lot. But individually, your attention and my attention don’t mean anything to the eyeball aggregators. It’s a drop in their growing ocean. It’s essentially nothing."

Image via @EffinBirds

Aren’t you ashamed to reserve for yourself only the remnants of your life and to dedicate to wisdom only that time can’t be directed to business?

Once you remove the specific details from the lives of the ancients, their lives were remarkably like ours. Take today's title, for example, which is a quotation from Seneca. He knew what it was like to be so busy doing 'productive' things to the exclusion of almost everything else.

My good friend Laura Hilliger wears her heart on her sleeve, and is the most no-nonsense person I know. By observing the way she lives and works, I'm learning to set limits and say exactly what I think:

Alright. I give up. #protip - If you are unable to be productive, forcing yourself to try and be productive is making you even more unproductive. Read a book or something instead.

The thing is that western society, implicitly at least, assumes that people are 'fixed' in terms of their personality and likes. But that's just the way that we choose to see ourselves:

Diagram showing The Socialised Mind, The Self-Authoring Mind, and the Self-Transforming Mind

I feel that the biggest thing that constrains us is our view of how we think other people see us. That perceived expectation becomes internalised, creating a 'psychic prison' which becomes an extremely limited playground. For better or for worse, we perform the role of how we think other people have come to see us.

One way many people find to avoid responsibility for their life choices is to play the 'busy' card. They're too busy to make good decisions, to look after their mental and physical health, to ensure that they're doing their best work.

The trouble is, that's simply not true. We've got more free time than our parents and grandparents:

Chart taken from The Atlantic

As the above chart demonstrates, it's not true that we actually work more hours. Instead, I think, it's that we're so concerned about how other people see us that we spend time doing things that feel like work but are mostly to do with presentation of self. Hence the amount of time spent on social networks like Instagram trying to create the highlights reel of our lives to show others.

One way of viewing this is that we've collectively internalised capitalism. The logic of the market has become as invisible to us as an ideology as water is to fish. In fact, some people say it's easier to imagine the end of the world than the end of capitalism!

How to know when you've internalised capitalism
- you determine your worth based on your productivity
- you feel guilty for resting
- your primary concern is to make yourself profitable
- you neglect your health
- you think 'hard work' is what brings happiness

Of course, it's become something of a cliché in our pseudo-enlightened times to talk of capitalism as the meta-problem behind everything. But that doesn't make it any less true.

Probably one of the biggest unacknowledged impacts of capitalism on our life is the artificial scarcity of time.
Without capitalism, we could all work less. We could rest more. We could let selfcare, play and creation come intuitively. A lot of things don’t need to be scheduled. We could just let time happen without any obligation to make a particular use of it.

When we act as if we're in a rush, things aren't properly scrutinised. Yesterday's news (and opinions, and facts) don't matter. It's all about today. Our politicians have no shame, and ethics are entirely subjective.
Existential Comics - Marx on Business Ethics (1)
Existential Comics - Marx on Business Ethics (2)
Existential Comics

Our identity is mediated by the market, by what we produce instead of who we are. I keep coming back to a fantastic episode of Jocelyn K. Glei's Hurry Slowly podcast entitled Who Are You Without The Doing? in which she explains that we should learn to 'sit with ourselves', learning that change comes from within:

You have to completely conquer the feeling that there is something fundamentally wrong with your human nature, and that therefore you need discipline to correct your behavior. As long as you feel the discipline comes from the outside, there is still a feeling that something is lacking in you.

Jocelyn K. Glei

Derek Sivers uses the metaphor of 'doors' to explain where he finds value and wants to spend his time. Some doors he opens help him grow as a person and foster positive relationships.

But one door is really no fun to open. I’m horrified at all the shouting, the second I open it. It’s an infinite dark room filled with psychologically tortured people, trying to get attention. Strangers screaming at strangers, starting fights. Businesses set up shop there, showing who’s said and done bad things today, because they make money when people get mad.

Derek Sivers

We keep wringing our hands about people's behaviour online, but it's that way for a reason. Hate is profitable for social networks:

Massive platforms like Facebook, Twitter, and YouTube “optimize for engagement,” and make automatic, algorithmic suggestions for every bit of content or action. From “you might also like” to “recommended just for you” to prioritizing things — anything — that will get you to click, comment, or share.

[...]

They know what will catch your attention. They know what will get you “engaged.” They know what will be more likely to lead you deeper into a rabbit hole, and what will make it harder to climb back out. Is it a literal, iron-clad trap? No. But the slippery, spiral path that leads people to the darkest corners of the internet is not an accident.

[...]

Hate is profitable. Conflict is profitable. Schadenfreude and shame are profitable. While we smugly point fingers, tsk-tsk, and think we’re being clever as we strategically dole out likes and shares, we forget that we are all just gruel-fed hamsters running on wheels deep inside giant, hyper-engineered, artificially intelligent, fully gamified, corporate-controlled virtual worlds that we absurdly think belong to us.

Ryan Ozawa

This all comes back to the time equation. Because we feel like we don't have enough time to curate things ourselves, we outsource that to others. That ends up with us handing our information environments over to others to manipulate and control. It's curate or be curated.

Nobody cares about how much money you earn. Nobody cares how productive you are. Not really.

Also, without sounding harsh, nobody else cares how productive you are. Of course, productivity is important for important things, and “getting stuff done” or whatever, but it doesn’t define you in any way. What does is things like your sense of humour, where your passions lie, how you comfort a friend who’s upset, and that weird noise you make when the delivery guy calls you to say he’s outside with your food.

Leila Mitwally

The trouble is that we don't want to have this conversation, because it questions our identity, and everything we've been working for over our careers and throughout our lives:

But we don’t want to hear that because accepting this truth means asking a lot of complicated questions about our society, in which work is glorified as the pinnacle of self-expression, and personal earnings are viewed as a measure of merit and esteem.

Instead, we would rather read about and buy into the idea that success in our work life is merely a matter of being more productive. If you just follow the ‘right’ set of algorithms or rules, you too can achieve ‘success’ in your work life, along with fame and recognition and a fat bank account.

Richard Whittall

So, to finish, let me revisit a link I shared recently from Jason Hickel. We can choose to live differently, to recognise the abundance of time and resources we have in the world. To slow down, to take stock, and reject economic growth as in any way a useful indicator of human flourishing:

It doesn’t have to be this way. We can call a halt to the madness – throw a wrench in the juggernaut. By de-enclosing social goods and restoring the commons, we can ensure that people are able to access the things that they need to live a good life without having to generate piles of income in order to do so, and without feeding the never-ending growth machine. “Private riches” may shrink, as Lauderdale pointed out, but public wealth will increase.

Jason Hickel

It doesn't have to be difficult. We can just, as Dan Lyons mentions in his book Lab Rats, decide to work on things that 'close the gap' or 'increase the gap'. What that means to you, in your context, is a different matter.

Friday feeds

These things caught my eye this week:

  • Some of your talents and skills can cause burnout. Here’s how to identify them (Fast Company) — "You didn’t mess up somewhere along the way or miss an important lesson that the rest of us received. We’re all dealing with gifts that drain our energy, but up until now, it hasn’t been a topic of conversation. We aren’t discussing how we end up overusing our gifts and feeling depleted over time."
  • Learning from surveillance capitalism (Code Acts in Education) — "Terms such as ‘behavioural surplus’, ‘prediction products’, ‘behavioural futures markets’, and ‘instrumentarian power’ provide a useful critical language for decoding what surveillance capitalism is, what it does, and at what cost."
  • Facebook, Libra, and the Long Game (Stratechery) — "Certainly Facebook’s audacity and ambition should not be underestimated, and the company’s network is the biggest reason to believe Libra will work; Facebook’s brand is the biggest reason to believe it will not."
  • The Pixar Theory (Jon Negroni) — "Every Pixar movie is connected. I explain how, and possibly why."
  • Mario Royale (Kottke.org) — "Mario Royale (now renamed DMCA Royale to skirt around Nintendo’s intellectual property rights) is a battle royale game based on Super Mario Bros in which you compete against 74 other players to finish four levels in the top three. "
  • Your Professional Decline Is Coming (Much) Sooner Than You Think (The Atlantic) — "In The Happiness Curve: Why Life Gets Better After 50, Jonathan Rauch, a Brookings Institution scholar and an Atlantic contributing editor, reviews the strong evidence suggesting that the happiness of most adults declines through their 30s and 40s, then bottoms out in their early 50s."
  • What Happens When Your Kids Develop Their Own Gaming Taste (Kotaku) — "It’s rewarding too, though, to see your kids forging their own path. I feel the same way when I watch my stepson dominate a round of Fortnite as I probably would if he were amazing at rugby: slightly baffled, but nonetheless proud."
  • Whence the value of open? (Half an Hour) — "We will find, over time and as a society, that just as there is a sweet spot for connectivity, there is a sweet spot for openness. And that point where be where the default for openness meets the push-back from people on the basis of other values such as autonomy, diversity and interactivity. And where, exactly, this sweet spot is, needs to be defined by the community, and achieved as a consensus."
  • How to Be Resilient in the Face of Harsh Criticism (HBR) — "Here are four steps you can try the next time harsh feedback catches you off-guard. I’ve organized them into an easy-to-remember acronym — CURE — to help you put these lessons in practice even when you’re under stress."
  • Fans Are Better Than Tech at Organizing Information Online (WIRED) — "Tagging systems are a way of imposing order on the real world, and the world doesn't just stop moving and changing once you've got your nice categories set up."

Header image via Dilbert

Ensuring the sustainability of Thought Shrapnel

Over the last couple of months, after coming back from a hiatus over Lent, I've really poured my free time into Thought Shrapnel. My hope was that, by providing daily content, there would be a corresponding uptick in the number of people willing to become a supporter.

In fact, the opposite has happened, with almost 10% of supporters ending their backing of Thought Shrapnel over the past few weeks. Obviously, I'm doing something wrong here.

After some research and comparison with other creators, I think I've figured out what's gone wrong:

Most people do not want more email. So if the only thing you have to offer them is, ‘Hey, subscribe to this newsletter and you’ll get some more email,’ that’s not that compelling. But if you can create a different value proposition where you can say, ‘Look, I’m creating the kind of writing that you can’t find anywhere else and I need you to be a part of this and to support this work if you value it,’ then I think that people get into that. And they want to get it four times a week, but it’s not necessarily the idea of getting it four times a week that is going to be the motivating factor.

Judd Legum

Nobody asked me to send them more email. Not one of the supporters asked for 'exclusive access' to articles a week before everyone else. I just assumed.

With Thought Shrapnel, it's not the money that drives me. After hosting costs, etc. I give away most of what I receive to support other creators and worthy causes. Rather, it's the exchange of energy that's important to me. Committing to even $1/month is different to just hitting 'like' or 'retweet'.

So, going forward, I'm going to try a different approach. For everything I publish:

  • Comments are on
  • Three different types of post each week
  • Everyone gets access at the same time

On Mondays I'll publish an article-style post. On Wednesdays I'll publish a post answering any questions that have come in, or a microcast. And then on Fridays I'll publish a round-up post of interesting links.

I'm still aiming to share 30 links per week. The weekly newsletter will still be a digest of what's gone on the open web. I just hope that doing things this way will be more sustainable.

So, I have a couple of questions:

  1. Do you have any questions for me to answer in tomorrow's post?
  2. Would you consider becoming a supporter of Thought Shrapnel?

Thanks in advance!

Our nature is such that the common duties of human relationships occupy a great part of the course of our life

Michel de Montaigne, one of my favourite writers, had a very good friend, a 'soulmate' in the form of Étienne de la Boétie. He seems to have been quite the character, and an early influence for anarchist thought, before dying of the plague in 1563 at the age of 32.

His main work is translated into English as The Politics of Obedience: The Discourse of Voluntary Servitude where he suggests that the reason we get tyrants and other oppressors is because we, the people, allow them to have power over us. It all seems very relevant to our times, despite being written around 450 years ago!

We live in a time of what Patrick Stokes in New Philosopher calls 'false media balance'. It's worth quoting at length, I think:

The problem is that very often the controversy in question is over whether there even is a controversy to begin with. Some people think the world is flat: does that mean the shape of the world is a controversial topic? If you think the mere fact of disagreement means there’s a controversy there, then pretty much any topic you care to mention will turn out to be controversial if you look hard enough. But in a more substantial sense, there’s no real controversy here at all. The scientific journals aren’t full of heated arguments over the shape of the planet. The university geography departments aren’t divided into warring camps of flattists and spherists. There is no serious flat-earth research program in the geology literature.

So far, so obvious. But think about certain other scientific ‘controversies’ where competing arguments do get media time, such as climate change, or the safety and efficacy of vaccination. On the one side you have the overwhelming weight of expert opinion; on the other side amateur, bad-faith pseudoscience. In the substantial sense there aren’t even ‘two sides’ here after all.

Yet that’s not what we see; we just see two talking heads, offering competing views. The very fact both ‘heads’ were invited to speak suggests someone, somewhere has decided they are worth listening to. In other words, the very format implicitly drags every viewpoint to the same level and treats them as serious candidates for being true. That’s fine, you might reply: sapere aude! Smart and savvy viewers will see the bad arguments or shoddy claims for what they are, right? Except there’s some evidence that precisely the opposite happens. The message that actually sticks with viewers is not “the bad or pseudoscientific arguments are nonsense”, but rather that “there’s a real controversy here”.

There’s a name for this levelling phenomenon: false balance. The naïve view of balance versus bias contains no room for ‘true’ versus ‘false’ balance. Introducing a truth-value means we are not simply talking about neutrality anymore – which, as we’ve seen, nobody can or should achieve fully anyway. False balance occurs when we let in views that haven’t earned their place, or treat non-credible views as deserving the same seat at the table.

To avoid false balance, the media needs to make important and context-sensitive discriminations about what is a credible voice and what isn’t. They need balance as a verb, rather than a noun. To balance is an act, one that requires ongoing effort and constant readjustment. The risk, after all, is falling – perhaps right off the edge of the world.

Patrick Stokes

Many of us receive a good proportion of our news via social networks. This means that, instead of being filtered by the mainstream media (who are doing a pretty bad job), the news is filtered by all of us, who are extremely partisan. We share things that validate our political, economic, moral, and social beliefs, and rail against those who state the opposite.

While we can wring our hands about the free speech aspect of this, it's important to note the point that's being made by the xkcd cartoon that accompanies today's article: we don't have to listen to other people if we don't want to.

In a great post from 2015, Audrey Watters explains how she uses some auto-blocking apps to make her continued existence on Twitter tolerable. Again, it's worth quoting at length:

I currently block around 3800 accounts on Twitter.

By using these automated blocking tools – particularly blocking accounts with few followers – I know that I’ve blocked a few folks in error. Teachers new to Twitter are probably the most obvious example. Of course, if someone feels as though I’ve accidentally blocked them, they can still contact me through other means. (And sometimes they do. And sometimes I unblock.)

But I’m not going to give up this little bit of safety and sanity I’ve found thanks to these collaborative blocking tools for fear of upsetting a handful of people who have mistakenly ended up being blocked by me. I’m sorry. I’m just not.

And I’m not in the least bit worried that, by blocking accounts, I’m somehow trapping myself in a “filter bubble.” I don’t need to be exposed to harassment and violence to know that harassment and violence are rampant. I don’t need to be exposed to racism and misogyny to know that racism and misogyny exist. I see that shit, I live that shit already daily, whether I block accounts on social media or not.

My blocking trolls doesn’t damage civic discourse; indeed, it helps me be able to be a part of it. Despite all the talk about the Internet and democratization of ideas and voices, the architecture of many of the technologies we use is designed to amplify certain ideas and voices and silence others, protect certain voices, expose others to violence. My blocking trolls doesn’t silence anybody. But it does help me have the stamina to maintain my voice.

People need not feel bad about blocking, worry that it's impolitic or impolite. It’s already hard work to be online. Often, it’s emotional work. (And it’s work we do for free, I might add.) People – particularly people of color, women, marginalized groups – shouldn’t have to take on the extra work of dealing with abusers and harassers and trolls. Block. Block. Block. Save your energy for other battles, ones that you choose to engage in.

Audrey Watters
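The collaborative blocking tools Watters describes typically rely on simple heuristics such as follower count and account age. Here's a minimal sketch of that idea in Python (the `Account` type and the thresholds are hypothetical, not taken from any real blocking app):

```python
from dataclasses import dataclass

@dataclass
class Account:
    username: str
    followers: int
    age_days: int

def should_block(account: Account,
                 min_followers: int = 15,
                 min_age_days: int = 30) -> bool:
    """Flag very new accounts with very few followers, the pattern
    Watters describes blocking automatically. False positives (such as
    teachers new to Twitter) are an accepted cost."""
    return (account.followers < min_followers
            and account.age_days < min_age_days)

accounts = [
    Account("longtime_educator", followers=5200, age_days=3000),
    Account("egg48291", followers=3, age_days=2),
]
blocked = [a.username for a in accounts if should_block(a)]
# blocked == ["egg48291"]
```

As Watters notes, any such heuristic will occasionally catch legitimate newcomers, which is why a manual unblock path matters.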

Blocking on the individual level is one thing, but what about whole instances running social networking software blocking other instances with which they're technically interoperable?

There are some really interesting conversations happening on the Fediverse at the moment. A 'free speech' social network called Gab, which was forced to shut down as a centralised service, will soon be relaunching as a fork of Mastodon.

In practice, this means that Gab can't easily be shut down, and there are many people on Mastodon, Pleroma, Misskey, and the other social networks that make up the Fediverse who are concerned about that. Those who have found a home on the Fediverse are disproportionately likely to have met with trolling, bullying, and abuse on centralised services such as Twitter.

Any service like Gab that's technically compatible with popular Fediverse services such as Mastodon can, by default, piggyback on the latter's existing ecosystem of apps. Some of these apps have decided to fight back. For example, Tusky has taken a stand, as can be seen in this update from its main developer:

Before I go off to celebrate Midsummer by being in bed sick (Swedish woes), I want to share a small update.

Tusky will keep blocking servers which actively promote fascism. This in particular means Gab.

We will get our next release out just in time for the 4th of July.

Don't even try to debate us about Free Speech. This is our speech, exercising #ANTIFA views. And we will keep doing it

We will post a bigger update at a later time about what this all really means.

@Tusky@mastodon.social

Some may wonder why, exactly, there's such a problem here. After all, can't individual users do what Audrey Watters is doing with Twitter, and block people on the individual level — either automatically, or manually?

The problem is that, due to practices such as sealioning, certain communities 'sniff blood' and then pile on:

Sealioning (also spelled sea-lioning and sea lioning) is a type of trolling or harassment which consists of pursuing people with persistent requests for evidence or repeated questions, while maintaining a pretense of civility. It may take the form of "incessant, bad-faith invitations to engage in debate".

Wikipedia

So it feels like we're entering a time with the balkanisation of the internet because of geo-politics (the so-called Splinternet), but also a retreat into online social interactions that are more... bounded.

It's going to be interesting to see where the next 18 months takes us, I think. I can definitely see a decline in centralised social networks, especially among certain demographics. If I'm correct, and these people end up on federated social networks, then it's up to those of us already there to set not only the technical standards, but the moral standards, too.


Also check out:

  • The secret rules of the internet (The Verge) — "The moderators of these platforms — perched uneasily at the intersection of corporate profits, social responsibility, and human rights — have a powerful impact on free speech, government dissent, the shaping of social norms, user safety, and the meaning of privacy. What flagged content should be removed? Who decides what stays and why? What constitutes newsworthiness? Threat? Harm? When should law enforcement be involved?"
  • The New Wilderness (Idle Words) — "Ambient privacy is not a property of people, or of their data, but of the world around us. Just like you can’t drop out of the oil economy by refusing to drive a car, you can’t opt out of the surveillance economy by forswearing technology (and for many people, that choice is not an option). While there may be worthy reasons to take your life off the grid, the infrastructure will go up around you whether you use it or not."
  • IQ rates are dropping in many developed countries and that doesn't bode well for humanity (Think) — "Details vary from study to study and from place to place given the available data. IQ shortfalls in Norway and Denmark appear in longstanding tests of military conscripts, whereas information about France is based on a smaller sample and a different test. But the broad pattern has become clearer: Beginning around the turn of the 21st century, many of the most economically advanced nations began experiencing some kind of decline in IQ."

Header image via xkcd

Friday fancies

These are some things I came across this week that made me smile:

  • The fake French minister in a silicone mask who stole millions (BBC News) — "For two years from late 2015, an individual or individuals impersonating France's defence minister, Jean-Yves Le Drian, scammed an estimated €80m (£70m; $90m) from wealthy victims including the Aga Khan and the owner of Château Margaux wines."
  • No, You Don’t Really Look Like That (The Atlantic) — "The global economy is wired up to your face. And it is willing to move heaven and Earth to let you see what you want to see."
  • Can You Unwrinkle A Raisin? (FiveThirtyEight) — "Back when you couldn’t just go buy a bottle of wine, folks would, instead, buy a giant brick of raisins, soak them in water to rehydrate the dried-out fruit and then store that juice in a dark cupboard for 60 days."
  • What Ecstasy Does to Octopuses (The Atlantic) — "At first they used too high a dose, and the animals “freaked out and did all these color changes”... But once the team found a more suitable dose, the animals behaved more calmly—and more sociably."
  • The English Word That Hasn’t Changed in Sound or Meaning in 8,000 Years (Nautilus) — "The word lox was one of the clues that eventually led linguists to discover who the Proto-Indo-Europeans were, and where they lived. "

Image via webcomic.name

The world is all variation and dissimilarity

Another quotation-as-title from Michel de Montaigne. I'm using it today, as I want to write a composite post based on a tweet I put out yesterday where I simply asked, "What shall I write about?"

Note: today's update is a little different as it's immediately available on the open web, instead of being limited to supporters for seven days. It's an experiment!

Here's some responses I got to my question:

  1. Tips for aspiring Mountain Leaders (@CraigTaylor74)
  2. Decentralised learning (@plaao)
  3. Slippers and sandals (@boyledsweetie)
  4. Carbon footprint of blockchain-based credentials (@ConcentricSky)
  5. How educators can promote their good practices without looking like they're bragging (@pullel)
  6. Why the last episode of Game of Thrones was so very bad (@MikeySwales)

Never let it be said that I don't give the people what they want! Six short sections, based on the serious (and not-so-serious) answers I got from my Twitter followers.

1. Tips for aspiring Mountain Leaders

Well, I'm not even on the course yet (two more Quality Mountain Days to go!) but some tips I'd pass on are:

  • Be flexible with your planned route, especially in respect to the weather
  • Don't buy super-expensive gear until you actually need it
  • Write down your learning experiences the same day as you experience them
  • Go walking with different people (although not with anyone who's got their ML, if you want it to count towards your QMDs!)
  • Do buy walking poles and gaiters, even if you feel a prat using them

...and, of course, subscribe to The Bushcraft Padawan!

2. Decentralised learning

Decentralisation is an interesting concept, mainly because it's so abstract for people to grasp. Usually, when people talk about decentralisation, they're either talking about politics or technology. Both, ultimately, are to do with power.

When it comes to learning, therefore, decentralised learning is all about empowering learners, which is often precisely the opposite of what we do in schools. We centralise instruction, and subject young people (and their teachers) to bells that control their time.

To my mind, decentralised learning is any attempt to empower learners to be more independent. That might involve them co-creating the curriculum, it might have something to do with the way we credential and/or recognise their learning. The important thing is that learning isn't something that's done to them.

3. Slippers and sandals

I'm wearing slippers right now, as I do when I'm in the house or working in my home office. I don't think you can go past Totes Isotoner, to be honest. Comfy!

Given I live in the North East of England, my opportunities to wear sandals are restricted to holidays and a few days in summer. I had a fantastic pair of Timberland sandals back in the day, but my wife finally threw them away because they were too smelly. I'm making do now with some other ones I found in the sale on Amazon, but they're actually slightly too big for me, which is annoying.

4. Carbon footprint of blockchain-based credentials

I'll start with the Bitcoin Energy Consumption Index, which gives us a couple of great charts to show the scale of the problem of using blockchains based on a proof-of-work algorithm:

That's right, the whole of the Czech Republic could be powered by the amount of energy required to run the Bitcoin network.
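To put that comparison into rough numbers, here's a back-of-envelope calculation. The figures are circa-2019 approximations of my own, not values taken from the index itself, which changes daily:

```python
# Approximate circa-2019 figures (assumptions, not the index's live values).
bitcoin_twh_per_year = 70.0        # estimated annual Bitcoin network consumption
czechia_twh_per_year = 67.0        # approximate Czech annual electricity use
bitcoin_tx_per_year = 120_000_000  # rough annual on-chain transaction count

kwh_per_tx = bitcoin_twh_per_year * 1e9 / bitcoin_tx_per_year
print(f"Bitcoin uses roughly {bitcoin_twh_per_year / czechia_twh_per_year:.0%} "
      f"of Czech annual electricity consumption")
print(f"That's about {kwh_per_tx:,.0f} kWh per on-chain transaction")
```

Hundreds of kilowatt-hours for a single transaction is what makes the comparison with conventional payment networks so stark.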

As you can see from the second chart, Bitcoin is a massive waste of energy versus our existing methods of payment. But what about other blockchain-based technologies, like Ethereum?

They've had the same problem, until recently, as Peter Fairley explains for IEEE Spectrum:

Like most cryptocurrencies, Ethereum relies on a computational competition called proof of work (PoW) . In PoW, all participants race to cryptographically secure transactions and add them to the blockchain’s globally distributed ledger. It’s a winner-takes-all contest, rewarded with newly minted cryptocoins. So the more computational firepower you have, the better your chances to profit.

[...]

Ethereum’s plan is to replace PoW with proof of stake (PoS)—an alternative mechanism for distributed consensus that was first applied to a cryptocurrency with the launch of Peercoin in 2012. Instead of millions of processors simultaneously processing the same transactions, PoS randomly picks one to do the job.

In PoS, the participants are called validators instead of miners, and the key is keeping them honest. PoS does this by requiring each validator to put up a stake—a pile of ether in Ethereum’s case—as collateral. A bigger stake earns a validator proportionately more chances at a turn, but it also means that a validator caught cheating has lots to lose.

Peter Fairley
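The stake-weighted selection Fairley describes can be illustrated with a toy sketch (real PoS protocols add verifiable randomness, validator committees, and detailed slashing rules well beyond this):

```python
import random

def pick_validator(stakes: dict, rng: random.Random) -> str:
    """Choose one validator with probability proportional to its stake,
    replacing PoW's energy-hungry hashing race with a weighted draw."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

def slash(stakes: dict, validator: str, fraction: float = 0.5) -> None:
    """A validator caught cheating forfeits part of its collateral."""
    stakes[validator] *= (1 - fraction)

stakes = {"alice": 320.0, "bob": 64.0, "carol": 16.0}
rng = random.Random(42)
picks = [pick_validator(stakes, rng) for _ in range(10_000)]
# alice holds 80% of the total stake, so she wins roughly 80% of the turns
```

The bigger your stake, the more turns you get, but the more you stand to lose if you're slashed: that's the incentive structure in a nutshell.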

Which brings us back to credentials. As I've said many times before, if you trust online banking and online shopping, then the Open Badges standard is secure enough for you. However, I can still see a use case for blockchain-based credentials, and wouldn't necessarily rule them out — especially if they're based on a PoS approach.

5. How educators can promote their good practices without looking like they're bragging

This is really contextual. What counts as 'bragging' in one culture and within one community won't be counted as such in another. It also depends on personality, I guess — something we don't really talk about as educators (other than through the lens of 'character').

The only advice I can give is to do these three things:

  1. Keep showing up in the same spaces every day/week so that people know where to find you (online/offline)
  2. Share your work without caring about recognition
  3. Point to other people and both recognise and celebrate their contributions

Remember, the point is to make the world a better place, not to care who gets credit for making it better!

6. Why the last episode of Game of Thrones was so very bad

I've never even seen part of one episode, so perhaps this can help?

[www.youtube.com/watch](https://www.youtube.com/watch?v=4GdWD0yxvqw)

Do you have any questions for me to answer next time I do this?

The habit of sardonic contemplation is the hardest habit of all to break

Angela Carter with the story of my life there. I can't help but be skeptical about 'Libra', Facebook's new cryptocurrency project. I'm skeptical about almost all cryptocurrencies, to be honest.

The website is marketing. It's all about 'empowering' the 'unbanked' worldwide. However, let's dive into the white paper:

Members of the Libra Association will consist of geographically distributed and diverse businesses, nonprofit and multilateral organizations, and academic institutions. The initial group of organizations that will work together on finalizing the association’s charter and become “Founding Members” upon its completion are, by industry:

  • Payments: Mastercard, PayPal, PayU (Naspers’ fintech arm), Stripe, Visa
  • Technology and marketplaces: Booking Holdings, eBay, Facebook/Calibra, Farfetch, Lyft, MercadoPago, Spotify AB, Uber Technologies, Inc.
  • Telecommunications: Iliad, Vodafone Group
  • Blockchain: Anchorage, Bison Trails, Coinbase, Inc., Xapo Holdings Limited
  • Venture Capital: Andreessen Horowitz, Breakthrough Initiatives, Ribbit Capital, Thrive Capital, Union Square Ventures
  • Nonprofit and multilateral organizations, and academic institutions: Creative Destruction Lab, Kiva, Mercy Corps, Women’s World Banking

We hope to have approximately 100 members of the Libra Association by the target launch in the first half of 2020.

So, all the usual suspects. How will Facebook ensure that we don't see the crazy price volatility we've seen with other cryptocurrencies?

Libra is designed to be a stable digital cryptocurrency that will be fully backed by a reserve of real assets — the Libra Reserve — and supported by a competitive network of exchanges buying and selling Libra. That means anyone with Libra has a high degree of assurance they can convert their digital currency into local fiat currency based on an exchange rate, just like exchanging one currency for another when traveling. This approach is similar to how other currencies were introduced in the past: to help instill trust in a new currency and gain widespread adoption during its infancy, it was guaranteed that a country’s notes could be traded in for real assets, such as gold. Instead of backing Libra with gold, though, it will be backed by a collection of low-volatility assets, such as bank deposits and short-term government securities in currencies from stable and reputable central banks.
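The 'fully backed by a reserve' design described above boils down to a simple invariant: coins are only minted against assets deposited into the reserve, and burned when redeemed. Here's a toy sketch of that invariant (the class and numbers are illustrative, not Libra's actual mechanism):

```python
class ReserveBackedCoin:
    """Toy model of a reserve-backed coin: every unit in circulation
    corresponds to fiat value held in the reserve."""

    def __init__(self) -> None:
        self.reserve_usd = 0.0
        self.supply = 0.0

    def mint(self, usd_deposited: float, rate: float) -> float:
        """Deposit fiat into the reserve; receive coins at the exchange rate."""
        coins = usd_deposited * rate
        self.reserve_usd += usd_deposited
        self.supply += coins
        return coins

    def redeem(self, coins: float, rate: float) -> float:
        """Burn coins and withdraw the corresponding fiat from the reserve."""
        usd = coins / rate
        if usd > self.reserve_usd:
            raise ValueError("reserve cannot cover redemption")
        self.reserve_usd -= usd
        self.supply -= coins
        return usd

coin = ReserveBackedCoin()
coin.mint(1_000.0, rate=2.0)             # deposit $1,000, receive 2,000 coins
usd_back = coin.redeem(500.0, rate=2.0)  # burn 500 coins, get $250 back
```

Note that whoever operates the reserve keeps the interest earned on those low-volatility assets; coin holders get stability, not yield.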

So it sounds like all of the value is being extracted by founding members. Now let's move onto the technology. Any surprises there? Nope.

Blockchains are described as either permissioned or permissionless in relation to the ability to participate as a validator node. In a “permissioned blockchain,” access is granted to run a validator node. In a “permissionless blockchain,” anyone who meets the technical requirements can run a validator node. In that sense, Libra will start as a permissioned blockchain.

This is as conservative as they come, which is exactly what your strategy would be if you're trying to transfer the entire monetary system to one that you control. People often joke about Facebook as 'social infrastructure', but this is a level beyond. This is Facebook as financial infrastructure.
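The permissioned/permissionless distinction the white paper draws can be captured in a few lines (the allowlist and node names here are hypothetical):

```python
# Hypothetical allowlist for a permissioned network; in Libra's case the
# Association's founding members would fill this role.
APPROVED_VALIDATORS = {"calibra", "visa-node", "uber-node"}

def may_run_validator(node_id: str, permissioned: bool,
                      meets_technical_requirements: bool) -> bool:
    if permissioned:
        # Permissioned: access must be granted; membership decides.
        return node_id in APPROVED_VALIDATORS
    # Permissionless: anyone who meets the technical bar can join.
    return meets_technical_requirements

may_run_validator("hobbyist-node", permissioned=True,
                  meets_technical_requirements=True)   # False
may_run_validator("hobbyist-node", permissioned=False,
                  meets_technical_requirements=True)   # True
```

Starting permissioned means the founding members decide who validates, whatever the stated ambitions to open up later.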

Given both current and potential future regulatory oversight, Facebook are very careful to distance themselves from Libra. In fact, the website proudly states that, "The Libra Association is an independent, not-for-profit membership organization, headquartered in Geneva, Switzerland."

To be fair, Josh Constine, writing for TechCrunch, notes that Facebook only gets one vote as a founding member of the Libra Association. It does actually look like they're in it for the long haul:

In cryptocurrencies, Facebook saw both a threat and an opportunity. They held the promise of disrupting how things are bought and sold by eliminating transaction fees common with credit cards. That comes dangerously close to Facebook’s ad business that influences what is bought and sold. If a competitor like Google or an upstart built a popular coin and could monitor the transactions, they’d learn what people buy and could muscle in on the billions spent on Facebook marketing. Meanwhile, the 1.7 billion people who lack a bank account might choose whoever offers them a financial services alternative as their online identity provider too. That’s another thing Facebook wants to be.

Josh Constine

Whereas before there's always been social pressure to have a Facebook account, now there could be pressures that span identity and economic necessities, too.

Some good commentary on the hurdles ahead comes from Kari Paul for The Guardian, who writes:

The company claims it will not attempt to bypass existing regulation but instead “innovate” on regulatory fronts. Libra will use the same verification and anti-fraud processes that banks and credit cards use and will implement automated systems to detect fraud, Facebook said in its launch. It also promised to give refunds to any users who are hacked or have Libra stolen from their digital wallets.

Kari Paul

Would this be the same kind of 'innovation' that Uber uses to muscle its way into cities without a license? Perhaps it's the shady business practices beloved of PayPal? Both companies are founding members, after all!

Right now, developers can get access to a 'test network' for Libra. The system itself won't be running until the end of 2020, so there's a lot of speculation. Here are some sources I found useful, but you'll need to make up your own mind. Is this a good thing?

To be perfectly symmetrical is to be perfectly dead

So said Igor Stravinsky. I'm a little behind on my writing, and prioritised writing up my experiences in the Lake District over the past couple of days.

Today's update is therefore a list post:

  • Degrowth: a Call for Radical Abundance (Jason Hickel) — "In other words, the birth of capitalism required the creation of scarcity. The constant creation of scarcity is the engine of the juggernaut."
  • Hey, You Left Something Out (Cogito, Ergo Sumana) — "People who want to compliment work should probably learn to give compliments that sound encouraging."
  • The Problem is Capitalism (George Monbiot) — "A system based on perpetual growth cannot function without peripheries and externalities. There must always be an extraction zone, from which materials are taken without full payment, and a disposal zone, where costs are dumped in the form of waste and pollution."
  • In Stores, Secret Surveillance Tracks Your Every Move (The New York Times) — "For years, Apple and Google have allowed companies to bury surveillance features inside the apps offered in their app stores. And both companies conduct their own beacon surveillance through iOS and Android."
  • The Inevitable Same-ification of the Internet (Matthew Ström) — "Convergence is not the sign of a broken system, or a symptom of a more insidious disease. It is an emergent phenomenon that arises from a few simple rules."


Life doesn’t depend on any one opinion, any one custom, or any one century

Baltasar Gracián was a 17th-century Spanish Jesuit who put together a book of aphorisms usually translated The Pocket Oracle and Art of Prudence or simply The Art of Worldly Wisdom. It's one of a few books that have had a very large effect on my life. Today's quotation-as-title comes from him.

The historian in me wonders why we seem to live in such crazy times. My simple answer is 'the internet', but I want to dig into it a bit using an essay from Scott Alexander:

[T]oday we have an almost unprecedented situation.

We have a lot of people... boasting of being able to tolerate everyone from every outgroup they can imagine, loving the outgroup, writing long paeans to how great the outgroup is, staying up at night fretting that somebody else might not like the outgroup enough.

This is really surprising. It’s a total reversal of everything we know about human psychology up to this point. No one did any genetic engineering. No one passed out weird glowing pills in the public schools. And yet suddenly we get an entire group of people who conspicuously promote and defend their outgroups, the outer the better.

What is going on here?

Scott Alexander

It's long, and towards the end, Alexander realises that he's perhaps guilty of the very thing he's pointing out. Nevertheless, his definition of an 'outgroup' is useful:

So what makes an outgroup? Proximity plus small differences. If you want to know who someone in former Yugoslavia hates, don’t look at the Indonesians or the Zulus or the Tibetans or anyone else distant and exotic. Find the Yugoslavian ethnicity that lives closely intermingled with them and is most conspicuously similar to them, and chances are you’ll find the one who they have eight hundred years of seething hatred toward.

Scott Alexander

Over the last three years in the UK, we've done a spectacular job of adding a hatred of the opposing side in the Brexit debate to our underlying national sense of xenophobia. What's necessary next is to bring everyone together and, whether we end up leaving the EU or not, forge a new narrative.

As Bryan Caplan points out, such efforts at cohesion need to be approached obliquely. He uses the example of American politics, but it applies equally elsewhere, including the UK:

Suppose you live in a deeply divided society: 60% of people strongly identify with Group A, and the other 40% strongly identify with Group B. While you plainly belong to Group A, you’re convinced this division is bad: It would be much better if everyone felt like they belonged to Group AB. You seek a cohesive society, where everyone feels like they’re on the same team.

What’s the best way to bring this cohesion about? Your all-too-human impulse is to loudly preach the value of cohesion. But on reflection, this is probably counter-productive. When members of Group B hear you, they’re going to take “cohesion” as a euphemism for “abandon your identity, and submit to the dominance of Group A.” None too enticing. And when members of Group A notice Group B’s recalcitrance, they’re probably going to think, “We offer Group B the olive branch of cohesion, and they spit in our faces. Typical.” Instead of forging As and Bs into one people, preaching cohesion tears them further apart.

Bryan Caplan

So, what can we do? Caplan suggests that members of one side should go out of their way to be overwhelmingly positive and friendly to the other side:

The first rule of promoting cohesion is: Don’t talk about cohesion. The second rule of promoting cohesion is: Don’t talk about cohesion. If you really want to build a harmonious, unified society, take one for the team. Discard your anger, swallow your pride, and show out-groups unilateral respect and friendship. End of story.

Bryan Caplan

It reminds me of the Christian advice to "turn the other cheek" which must have melted the brains of those listening to Jesus who were used to the Old Testament approach:

“You have heard that it was said, ‘An eye for an eye and a tooth for a tooth.’ But I say to you, Do not resist the one who is evil. But if anyone slaps you on the right cheek, turn to him the other also. And if anyone would sue you and take your tunic, let him have your cloak as well.

Matthew 5:38-40 (ESV)

Over the last 20 years, as the internet has played an ever-increasing role in our daily lives, we've seen a real ramping-up of the feminist movement, gay marriage becoming the norm in civilised western democracies, and movements like #BlackLivesMatter reminding us of just how racist our societies are.

In addition, despite the term being coined as long ago as 1989, we've seen a rise in awareness around intersectionality. It's not exactly a radical notion to say that being more connected leads to more awareness of 'outgroups'. What is interesting is the way that we choose to deal with that.

Let's have a quick look at the demographics from the Brexit vote three years ago:

Brexit demographics from The Guardian

Remain voters were, on the whole, younger, better educated, and more well-off than Leave voters. They were also slightly more likely to be born outside the UK. I haven't done the research, but I just have a feeling that the generational differences here are to do with relative exposure to outgroups.

What's more interesting than the result of the referendum itself, of course, is the reaction since then, with both 'Leavers' and 'Remainers' digging in to their entrenched positions. Now that we've created new outgroups, we can join together in welcoming in the old ones. Hence LGBT+ pride rainbows in shops and everywhere else.

As I explained five years ago, one of the problems is that we're not collectively aware enough of the role money plays in our democratic processes and information landscapes:

The problem with social networks as news platforms is that they are not neutral spaces. Perhaps the easiest way to get quickly to the nub of the issue is to ask how they are funded. The answer is clear and unequivocal: through advertising. The two biggest social networks, Twitter and Facebook (which also owns Instagram and WhatsApp), are effectively “services with shareholders.” Your interactions with other people, with media, and with adverts, are what provide shareholder value. Lest we forget, CEOs of publicly-listed companies have a legal obligation to provide shareholder value. In an advertising-fueled online world this means continually increasing the number of eyeballs looking at (and fingers clicking on) content.

Doug Belshaw

Sadly, in the west we invested in Computing to the detriment of critical digital literacies at exactly the wrong moment. That investment should have come on top of a real push to help everyone in society realise the importance of questioning and reflecting on their information environment.

Much as some people might like to, we can't put the internet back in a box. It's connected us all, for better and for worse, in ways that only a few would have foreseen. It's changing the way we interact with one another, the way we buy things, and the way we think about education, work, and human flourishing.

All these connections might mean that the style of representative democracy we're currently used to needs tweaking. As Jamie Bartlett points out in The People vs Tech, "these are spiritual as well as technical questions".


Also check out:

  • There is nothing more depressing than “positive news” (The Outline) — "The world is often a bummer, but a whole ecosystem of podcasts and Facebook pages have sprung up to assure you that things are actually great."
  • Space for More Spaces (CogDogBlog) — "I still hold on to the idea that those old archaic, pre-social media constructs, a personal blog, is the main place, the home, to operate from."
  • Clay Shirky on Mega-Universities and Scale (Phil on EdTech) — "What the mega-university story gets right is that online education is transforming higher education. What it gets wrong is the belief that transformation must end with consolidation around a few large-scale institutions"

Friday feastings

These are things I came across that piqued my attention:

  • What do cats do all day? (The Kid Should See This) — "Catcam footage from collar cameras captured the activities of 16 free-roaming domestic cats in England as they explored, stared, touched noses, hunted, vocalized, and more."
  • These researchers invented an entirely new way of building with wood (Fast Company) — "Each of the 12 wooden components of the tower was made by laminating two pieces of wood with different levels of moisture. Then, when the laminated pieces of wood dried out, the piece of wood curved naturally–no molds or braces needed."
  • What Did Old English Sound Like? Hear Reconstructions of Beowulf, The Bible, and Casual Conversations (Open Culture) — "Over the course of 1000 years, the language came together from extensive contact with Anglo-Norman, a dialect of French; then became heavily Latinized and full of Greek roots and endings; then absorbed words from Arabic, Spanish, and dozens of other languages, and with them, arguably, absorbed concepts and pictures of the world that cannot be separated from the language itself."
  • Adversarial interoperability: reviving an elegant weapon from a more civilized age to slay today's monopolies (BoingBoing) — "This kind of adversarial interoperability goes beyond the sort of thing envisioned by "data portability," which usually refers to tools that allow users to make a one-off export of all their data, which they can take with them to rival services. Data portability is important, but it is no substitute for the ability to have ongoing access to a service that you're in the process of migrating away from."
  • Fables of School Reform (The Baffler) — "Even pre-internet efforts to upgrade the technological prowess of American schools came swathed in the quasi-millennial promise of complete school transformation."

Even in their sleep men are at work

For today's title I've used Marcus Aurelius' more concise, if unfortunately gendered, paraphrasing of a slightly longer quotation from Heraclitus. It's particularly relevant to me at the moment, as recently I've been sleepwalking. This isn't a new thing; I've been doing it all my life when something's been bothering me.

When I tell people about this, they imagine something similar to the cartoon above. The reality is somewhat more banal, with me waking up almost as soon as I get out of bed and then getting back into it.

Sometimes I'm not entirely sure what's bothering me. Other times I know, but it's a combination of things. In an article for Inc., Amy Morin gives some advice, explaining that there's an important difference between 'ruminating' and 'problem-solving':

If you're behind on your bills, thinking about how to get caught up can be helpful. But imagining yourself homeless or thinking about how unfair it is that you got behind isn't productive.

So ask yourself, "Am I ruminating or problem-solving?"
If you're dwelling on the problem, you're ruminating. If you're actively looking for solutions, you're problem-solving.

Amy Morin

Morin goes on to talk about 'changing the channel' which can be a very difficult thing to do. One thing that helps me is reading the work of Stoic philosophers such as The Enchiridion by Epictetus, which begins with some of the best advice I've ever read:

Some things are in our control and others not. Things in our control are opinion, pursuit, desire, aversion, and, in a word, whatever are our own actions. Things not in our control are body, property, reputation, command, and, in one word, whatever are not our own actions.

The things in our control are by nature free, unrestrained, unhindered; but those not in our control are weak, slavish, restrained, belonging to others. Remember, then, that if you suppose that things which are slavish by nature are also free, and that what belongs to others is your own, then you will be hindered. You will lament, you will be disturbed, and you will find fault both with gods and men. But if you suppose that only to be your own which is your own, and what belongs to others such as it really is, then no one will ever compel you or restrain you. Further, you will find fault with no one or accuse no one. You will do nothing against your will. No one will hurt you, you will have no enemies, and you will not be harmed.

Aiming therefore at such great things, remember that you must not allow yourself to be carried, even with a slight tendency, towards the attainment of lesser things. Instead, you must entirely quit some things and for the present postpone the rest. But if you would both have these great things, along with power and riches, then you will not gain even the latter, because you aim at the former too: but you will absolutely fail of the former, by which alone happiness and freedom are achieved.

Work, therefore to be able to say to every harsh appearance, "You are but an appearance, and not absolutely the thing you appear to be." And then examine it by those rules which you have, and first, and chiefly, by this: whether it concerns the things which are in our own control, or those which are not; and, if it concerns anything not in our control, be prepared to say that it is nothing to you.

Epictetus

Donald Robertson, founder of Modern Stoicism, is an author and psychotherapist. Robertson was interviewed by Knowledge@Wharton for their podcast, which they've also transcribed. He makes a similar point to Epictetus, based on the writings of Marcus Aurelius:

Ultimately, the only thing that’s really under our control is our own will, our own actions. Things happen to us, but what we can really control is the way that we respond to those things. Stoicism wants us to take also greater responsibility, greater ownership for the things that we can actually do, both in terms of our thoughts and our actions, and respond to the situations that we face.

Donald Robertson

Robertson talks in the interview about how Stoicism has helped him personally:

It’s helped me to cope with a lot of things, even relatively trivial things. The last time I went to the dentist, I’m sure I was using Stoic pain management techniques. It becomes a habitual thing. Coping with some of the stress that therapists have when they’re dealing with clients who sometimes describe very traumatic problems, and the stress of working with other people who have their difficulties and stresses. [I moved] to Canada a few years ago, and that was a big upheaval for me. As for many people, a life-changing event like that can require a lot to deal with. Learning to think about things like a Stoic has helped me to negotiate all of these things in life.

Donald Robertson

Although I haven't done it since August 2010(!), I used to do something which I referred to as "calling myself into the office". The idea was that I'd set myself three to five goals, and then review them at the end of the month. I'd also set myself some new goals.

The value of doing this is that you can see that you're making progress. It's something that I should definitely start doing again. I was reminded of this approach after reading an article at Career Contessa about weekly self-evaluations. The suggested steps are:

  1. Celebrate your wins
  2. Address your losses or weaknesses
  3. Note your "coulda, woulda, shoulda" tasks
  4. Create goals for next week
  5. Summarise it all in one sentence

While Career Contessa suggests this will all take only five minutes, I think that if you did it properly it might take more like 20 minutes to half an hour. Whether you do it weekly or monthly probably depends on the size of the goals you're trying to achieve. Either way, it's a valuable exercise.

We all need to cut ourselves some slack, to go easy on ourselves. The chances are that the thing we're worrying about isn't such a big deal in the scheme of things, and the world won't end if we don't get that thing done right now. Perhaps regular self-examination, whether through Stoicism or weekly/monthly reviews, can help more of us with that?


Also check out:

  • Trying (Snakes and Ladders) — "I realized that one of the reasons I like doing the newsletter so much is that I have (quite unconsciously) understood it as a place not to do analysis or critique but to share things that give me delight."
  • 43 — All in & with the flow (Buster Benson) — "It’s tempting to always rationalize why our current position is optimal, but as I get older it’s a lot easier to see how things move in cycles, and the cycles themselves are what we should pay attention to more than where we happen to be in them at the moment."
  • Four Ways to Figure Out What You Really Want to Do with Your Life (Lifehacker) — "In the end, figuring out your passion, your career path, your life purpose—whatever you want to call it—isn’t an easy process and no magic bullet exists for doing it."

The proper amount of wealth is that which neither descends to poverty nor is far distant from it

So said Seneca, in a quotation I found via the consistently-excellent New Philosopher magazine. In my experience, 'wealth' is a relative concept. I've met people who are, to my mind, fabulously well-off, but don't feel it because their peers are wealthier. Likewise, I've met people who aren't materially well-off, but don't realise they're poor because their friends and colleagues are too.

Let's talk about inequality. Cory Doctorow, writing for BoingBoing, points to an Institute for Fiscal Studies report (PDF) by Robert Joyce and Xiaowei Xu that is surprisingly readable. They note cultural differences around inequality and its link to (perceived) meritocracy: 

A recent experiment found that people were much more accepting of inequality when it resulted from merit instead of luck (Almas, Cappelen and Tungodden, 2019). Given the opportunity to redistribute gains to others, people were significantly less likely to do so when differences in gains reflected differences in productivity. The experiment also revealed differences between countries in people’s views of what is fair, with more Norwegians opting for redistribution even when gains were merit-based and more Americans accepting inequality even when outcomes were due to luck.

This suggests that to understand whether inequality is a problem, we need to understand the sources of inequality, views of what is fair and the implications of inequality as well as the levels of inequality. Are present levels of inequalities due to well-deserved rewards or to unfair bargaining power, regulatory failure or political capture? Can meritocracy be unfair? What is the moral status of luck? And what if inequalities derived from a fair process in one generation are transmitted on to future generations?

Robert Joyce and Xiaowei Xu

Can meritocracy be unfair? Yes, of course it can, as I pointed out in this article from a few years back. To quote myself:

I’d like to see meritocracy consigned to the dustbin of history as an outdated approach to society. At a time in history when we seek to be inclusive, to recognise and celebrate diversity, the use of meritocratic practices seems reactionary and regressive. Meritocracy applies a one-size-fits-all, cookie-cutter approach that — no surprises here — just happens to privilege those already in positions of power.

Doug Belshaw

Doctorow also cites Chris Dillow, who outlines in a blog post eight reasons why inequality makes us poorer. Dillow explains that "what matters is not so much the level of inequality as the effect it has". I've attempted to summarise his reasons below:

  1. "Inequality encourages the rich to invest not in innovation but in... means of entrenching their privilege and power"
  2. "Unequal corporate hierarchies can demotivate junior employees"
  3. "Economic inequality leads to less trust"
  4. "Inequality can prevent productivity-enhancing change"
  5. "Inequality can cause the rich to be fearful of future redistribution or nationalization, which will make them loath to invest"
  6. "Inequalities of power... have allowed governments to abandon the aim of truly full employment and given firms more ability to boost profits by suppressing wages and conditions [which] has disincentivized investments in labour-saving technologies"
  7. "High-powered incentives that generate inequality within companies can backfire... [as] they encourage bosses to hit measured targets and neglect less measurable things"
  8. "High management pay can entrench... the 'forces of conservatism' which are antagonistic to technical progress"

Meanwhile, Eleanor Ainge Roy reports for The Guardian that the New Zealand government has unveiled a 'wellbeing budget' focused on "mental health services and child poverty as well as record investment in measures to tackle family violence". Their finance minister is quoted by Roy as saying:

For me, wellbeing means people living lives of purpose, balance and meaning to them, and having the capabilities to do so.

This gap between rhetoric and reality, between haves and have-nots, between the elites and the people, has been exploited by populists around the globe.

Grant Robertson

Thankfully, we don't have to wait for government to act on inequality. We can seize the initiative ourselves through co-operation. In The Boston Globe, Andy Rosen explains that different ways of organising are becoming more popular:

The idea has been percolating for a while in some corners of the tech world, largely as a response to the gig economy, in which workers are often considered contractors and don’t get the same protections and benefits as employees. In New York, for example, Up & Go, a kind of Uber for house cleaning, is owned by the cleaners who provide the services.

[...]

People who have followed the co-op movement say the model, and a broader shift toward increased employee and consumer control, is likely to become more prominent in coming years, especially as aging baby boomers look for socially responsible ways to cash out and retire by selling their companies to groups of employees.

Andy Rosen

Some of the means by which we can make society a fairer and more equal place come through government intervention at the policy level. But we should never forget the power we have through self-organising and co-operating together.



Situations can be described but not given names

So said that most enigmatic of philosophers, Ludwig Wittgenstein. Today's article is about the effect of external stimulants on us as human beings, whether or not we can adequately name them.

Let's start with music, one of my favourite things in all the world. If the word 'passionate' hadn't been devalued from rampant overuse, I'd say that I'm passionate about music. One of the reasons is because it produces such a dramatic physiological response in me; my hairs stand on end and I get a surge of endorphins — especially if I'm also running.

That's why Greg Evans' piece for The Independent makes me feel quite special. He reports on (admittedly small-scale) academic research which shows that some people really do feel music differently to others:

Matthew Sachs, a former undergraduate at Harvard, last year studied individuals who get chills from music to see how this feeling was triggered.

The research examined 20 students, 10 of whom admitted to experiencing the aforementioned feelings in relation to music and 10 who didn't, and took brain scans of all of them.

He discovered that those that had managed to make the emotional and physical attachment to music actually have different brain structures than those that don't.

The research showed that they tended to have a denser volume of fibres that connect their auditory cortex and areas that process emotions, meaning the two can communicate better.

Greg Evans

This totally makes sense to me. I'm extremely emotionally invested in almost everything I do, especially my work. For example, I find it almost unbearably difficult to work on something that I don't agree with or think is important.

The trouble for people like me, of course, is that unless we're careful we're much more likely to become 'burned out' by our work. Nate Swanner reports for Dice that the World Health Organisation (WHO) has recently recognised burnout as a legitimate medical syndrome:

The actual definition is difficult to pin down, but the WHO defines burnout by these three markers:

  • Feelings of energy depletion or exhaustion.
  • Increased mental distance from one’s job, or feelings of negativism or cynicism related to one’s job.
  • Reduced professional efficacy.

Interestingly enough, the actual description of burnout asks that all three of the above criteria be met. You can’t be really happy and not producing at work; that’s not burnout.

As the article suggests, now that burnout is a recognised medical term, we face the prospect of employers being liable for creating environments that cause burnout in their employees. It will no longer, hopefully, be a badge of honour to have burned yourself out for the sake of a venture capital-backed startup.

Having experienced burnout in my twenties, I know that the road to recovery can take a while, and that it has an effect on the people around you. You have to replace negative thoughts and habits with new ones. I ultimately ended up moving both house and sectors to get over it.

As Jason Fried notes on Signal v. Noise, we humans always form habits:

When we talk about habits, we generally talk about learning good habits. Or forming good habits. Both of these outcomes suggest we can end up with the habits we want. And technically we can! But most of the habits we have are habits we ended up with after years of unconscious behavior. They’re not intentional. They’ve been planting deep roots under the surface, sight unseen. Fertilized, watered, and well-fed by recurring behavior. Trying to pull that habit out of the ground later is going to be incredibly difficult. Your grip has to be better than its grip, and it rarely is.

Jason Fried

This is a great analogy. It's easy for weeds to grow in the garden of our mind. If we're not careful, as Fried points out, these can be extremely difficult to get rid of once established. That's why, as I've discussed before, tracking one's habits is itself a good habit to get into.

Over a decade ago, a couple of years after suffering from burnout, I wrote a post outlining what I rather grandly called The Vortex of Uncompetence. Let's just say that, if you recognise yourself in any of what I write in that post, it's time to get out. And quickly.


Also check out:

  • Your Kids Think You’re Addicted to Your Phone (The New York Times) — "Most parents worry that their kids are addicted to the devices, but about four in 10 teenagers have the same concern about their parents."
  • Why the truth about our sugar intake isn't as bad as we are told (New Scientist) — "In fact, the UK government 'Family food datasets', which have detailed UK household food and drink expenditure since 1974, show there has been a 79 per cent decline in the use of sugar since 1974 – not just of table sugar, but also jams, syrups and honey."
  • Can We Live Longer But Stay Younger? (The New Yorker) — "Where fifty years ago it was taken for granted that the problem of age was a problem of the inevitable running down of everything, entropy working its worst, now many researchers are inclined to think that the problem is “epigenetic”: it’s a problem in reading the information—the genetic code—in the cells."

There’s no perfection where there’s no selection

So said Baltasar Gracián. One of the reasons that e-portfolios never really took off was because there's so much to read. Can you imagine sifting through hundreds of job applications where each applicant had a fully-fledged e-portfolio, including video content?

That's why I've been so interested in Open Badges, and have written plenty on the subject over the last eight years. If you're new to the party, you'll encounter various terms such as 'microcredentials', 'digital badges', and 'digital credentials'. What distinguishes Open Badges is the underlying standard, which was previously stewarded by Mozilla (including during my time there) and is now stewarded by IMS Global Learning Consortium.

When I left Mozilla, I did a lot of work with City & Guilds, an awarding body that's well known for its vocational qualifications. They took a particular interest in Open Badges, for obvious reasons. In this article for FE News, Kirstie Donnelly (Managing Director of the City & Guilds Group) explains their huge potential:

The fact that you can actually stack these credentials, and they become portable, then you can publish them online, through your LinkedIn. I just think it puts a very different dynamic into how the learner owns their experience, but at the same time the employers and the education system can still influence very much how those credentials are built and stacked.

Kirstie Donnelly

Like it or not, a lot of education is 'signalling' — i.e. providing an indicator that you can do a thing. The great thing about Open Badges is that you can make credentials much more granular and, crucially, include evidence of your ability to do the thing you claim to be able to do.
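To make this granularity concrete: under the hood, an Open Badges assertion is a small piece of JSON-LD linking a recipient, a badge class, and (optionally) evidence. Here's a minimal, hypothetical hosted assertion — all of the URLs and the identity hash are illustrative, not real:

```json
{
  "@context": "https://w3id.org/openbadges/v2",
  "type": "Assertion",
  "id": "https://example.org/assertions/123",
  "recipient": {
    "type": "email",
    "hashed": true,
    "identity": "sha256$c7ef86405ba71b85acd8e2e95166c4b111448089f2e1599f42fe1bba46e865c5"
  },
  "badge": "https://example.org/badges/collaborative-working",
  "issuedOn": "2019-06-01T00:00:00Z",
  "evidence": "https://example.org/evidence/project-writeup",
  "verification": { "type": "hosted" }
}
```

It's the `evidence` field that lets a credential point at actual proof of the claim, rather than merely asserting it — which is exactly the granularity argument above.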

As Tyler Cowen notes on Marginal Revolution, without this granularity there's a knock-on effect on societal inequality. Privilege is perpetuated. He quotes a working paper by Gaurab Aryal, Manudeep Bhuller, and Fabian Lange, who state:

The social and the private returns to education differ when education can increase productivity and also be used to signal productivity. We show how instrumental variables can be used to separately identify and estimate the social and private returns to education within the employer learning framework of Farber and Gibbons (1996) and Altonji and Pierret (2001). What an instrumental variable identifies depends crucially on whether the instrument is hidden from or observed by the employers. If the instrument is hidden, it identifies the private returns to education, but if the instrument is observed by employers, it identifies the social returns to education.

Aryal, Bhuller, and Lange

I take this to mean that, in a marketplace, the more the 'buyers' (i.e. employers) understand what's on offer, the more this changes the way that 'sellers' (i.e. potential employees) position themselves. Open Badges and other technologies can help with this.

Understandably, a lot is made of digital credentials for recruitment. Indeed, I've often argued that badges are important at times of transition — whether into a job, on the job, or onto your next job. But they are also important for reasons other than employment.

Lauren Acree, writing for Digital Promise explains how they can be used to foster more inclusive classrooms:

The Learner Variability micro-credentials ask educators to better understand students as learners. The micro-credentials support teachers as they partner with students in creating learning environments that address learners’ needs, leverage their strengths, and empower students to reflect and adjust as needed. We found that micro-credentials are one important way we can ultimately build teacher capacity to meet the needs of all learners.

Lauren Acree

The article includes this image representing a taxonomy of how teachers use micro-credentials in their work:

If we zoom out even further, we can see that micro-credentials as a form of 'currency' could play a big role in how we re-imagine society. Tim Riches, who I collaborated with while at both Mozilla and City & Guilds, has written a piece for the RSA about the 'Cities of Learning' projects that he's been involved in. All of these have used badges in some form or other.

In formal education, the value of learning is measured in qualifications. However, qualifications only capture a snapshot of what we know, not what we can do. What’s more, they tend to measure routine skills - the ones most vulnerable to automation and outsourcing.

[...]

Cities are full of people with unrecognised talents and potential. Cities are a huge untapped resource. Skills are developed every day in the community, at work and online, but they are hidden from view - disconnected from formal education and employers.

Tim Riches

I don't live in a city, and don't necessarily see them as the organising force here, but I do think that, on a societal level, there's something about recognising potential. Tim includes a graphic in his article which, I think, captures this nicely:

There's a phrase that's often used by feminist writers: "you can't be what you can't see". In other words, if you don't have any role models in a particular area, you're unlikely to think of exploring it. Similarly, if you don't know anyone who's a lawyer, or a sailor, or a horse rider, it's not perhaps something you'd think of doing.

If we can wrest control of innovations such as Open Badges away from the incumbents, and focus on human flourishing, I can see real opportunities for what Serge Ravet and others call 'open recognition'. Otherwise, we're just co-opting them to prop up and perpetuate the existing, unequal system.



Friday fathomings

I enjoyed reading these:


Image via Indexed

There’s no viagra for enlightenment

This quotation from the enigmatic Russell Brand seemed appropriate for the subject of today's article: the impact of so-called 'deepfakes' on everything from porn to politics.

First, what exactly are 'deepfakes'? Mark Wilson explains in an article for Fast Company:

In early 2018, [an anonymous Reddit user named Deepfakes] uploaded a machine learning model that could swap one person’s face for another face in any video. Within weeks, low-fi celebrity-swapped porn ran rampant across the web. Reddit soon banned Deepfakes, but the technology had already taken root across the web–and sometimes the quality was more convincing. Everyday people showed that they could do a better job adding Princess Leia’s face to The Force Awakens than the Hollywood special effects studio Industrial Light and Magic did. Deepfakes had suddenly made it possible for anyone to master complex machine learning; you just needed the time to collect enough photographs of a person to train the model. You dragged these images into a folder, and the tool handled the convincing forgery from there.

Mark Wilson

As you'd expect, deepfakes bring up huge ethical issues, as Jessica Lindsay reports for Metro. It's a classic case of our laws not being able to keep up with what's technologically possible:

With the advent of deepfake porn, the possibilities have expanded even further, with people who have never starred in adult films looking as though they’re doing sexual acts on camera.

Experts have warned that these videos enable all sorts of bad things to happen, from paedophilia to fabricated revenge porn.

[...]

This can be done to make a fake speech to misrepresent a politician’s views, or to create porn videos featuring people who did not star in them.

Jessica Lindsay

It's not just video, either, with Google's AI now able to translate speech from one language to another and keep the same voice. Karen Hao embeds examples in an article for MIT Technology Review demonstrating where this is all headed.

The results aren’t perfect, but you can sort of hear how Google’s translator was able to retain the voice and tone of the original speaker. It can do this because it converts audio input directly to audio output without any intermediary steps. In contrast, traditional translation systems convert audio into text, translate the text, and then resynthesize the audio, losing the characteristics of the original voice along the way.

Karen Hao

The impact on democracy could be quite shocking, with the ability to create video and audio that feels real but is actually completely fake.

However, as Mike Caulfield notes, the technology doesn't even have to be that sophisticated to create something that can be used in a political attack.

There’s a video going around that purportedly shows Nancy Pelosi drunk or unwell, answering a question about Trump in a slow and slurred way. It turns out that it is slowed down, and that the original video shows her quite engaged and articulate.

[...]

In musical production there is a technique called double-tracking, and it’s not a perfect metaphor for what’s going on here but it’s instructive. In double tracking you record one part — a vocal or solo — and then you record that part again, with slight variations in timing and tone. Because the two tracks are close, they are perceived as a single track. Because they are different though, the track is “widened” feeling deeper, richer. The trick is for them to be different enough that it widens the track but similar enough that they blend.

Mike Caulfield
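Caulfield's double-tracking technique can be sketched in a few lines of Python: two nearly identical signals, offset slightly in pitch and timing, blend into a single 'widened' track. This is a toy illustration rather than real audio-production code, and the frequencies and offsets are arbitrary choices of mine:

```python
import math

SR = 8000          # sample rate in Hz; deliberately low to keep the sketch quick
N = SR             # one second of samples

def take(freq_hz, delay_s=0.0):
    """One recorded 'take': a sine wave at freq_hz, starting delay_s into the track."""
    return [
        math.sin(2 * math.pi * freq_hz * (n / SR - delay_s)) if n / SR >= delay_s else 0.0
        for n in range(N)
    ]

lead = take(440.0)                  # the original part
double = take(441.5, delay_s=0.02)  # second take: ~1.5 Hz detuned, ~20 ms late

# Because the takes are close but not identical, mixing them is heard as
# one "widened" track rather than two separate parts.
mix = [0.5 * (a + b) for a, b in zip(lead, double)]
```

The trick, as Caulfield says, is that the copies are different enough to widen the sound but similar enough to blend — which is what makes it a useful metaphor for a doctored video that stays close to the original.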

This is where blockchain could actually be a useful technology. Caulfield often talks about the importance of 'going back to the source' — in other words, checking the provenance of what it is you're reading, watching, or listening to. There's potential here for checking that something is actually the original document/video/audio.
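Stripped of the blockchain branding, 'going back to the source' boils down to comparing a cryptographic fingerprint of the copy you have against one published for the original. A minimal Python sketch — the tamper-evident publication of the digest is assumed, not shown:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of the content: identical bytes always yield an identical digest."""
    return hashlib.sha256(data).hexdigest()

# The publisher computes a digest of the original and records it somewhere
# tamper-evident (a ledger, a signed statement, a trusted registry...).
original = b"official briefing video, full speed"
published_digest = fingerprint(original)

# Later, anyone can check whether the copy circulating is really the original.
faithful_copy = b"official briefing video, full speed"
doctored_copy = b"official briefing video, slowed to 75%"

assert fingerprint(faithful_copy) == published_digest   # byte-identical: verifies
assert fingerprint(doctored_copy) != published_digest   # any edit breaks the match
```

The hard part isn't the hashing; it's publishing the digest somewhere the audience trusts and which can't be quietly rewritten — the role a blockchain, or any append-only log, is meant to play here.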

Ultimately, however, people believe what they want to believe. If they want to believe Donald Trump is an idiot, they'll read and share things showing him in a negative light. It doesn't really matter if it's true or not.



Wretched is a mind anxious about the future

So said one of my favourite non-fiction authors, the 16th century proto-blogger Michel de Montaigne. There's plenty of writing about how we need to be anxious because of the drift towards a future of surveillance states. Eventually, because it's not currently affecting us here and now, we become blasé. We forget that it's already the lived experience for hundreds of millions of people.

Take China, for example. In The Atlantic, Derek Thompson writes about the Chinese government's brutality against the Muslim Uyghur population in the western province of Xinjiang:

[The] horrifying situation is built on the scaffolding of mass surveillance. Cameras fill the marketplaces and intersections of the key city of Kashgar. Recording devices are placed in homes and even in bathrooms. Checkpoints that limit the movement of Muslims are often outfitted with facial-recognition devices to vacuum up the population’s biometric data. As China seeks to export its suite of surveillance tech around the world, Xinjiang is a kind of R&D incubator, with the local Muslim population serving as guinea pigs in a laboratory for the deprivation of human rights.

Derek Thompson

As Ian Welsh points out, surveillance states usually involve us in the West pointing towards places like China and shaking our heads. However, if you step back a moment and remember that societies like the US and UK are becoming more unequal over time, then perhaps we're the ones who should be worried:

The endgame, as I’ve been pointing out for years, is a society in which where you are and what you’re doing, and have done, is always known, or at least knowable. And that information is known forever, so the moment someone with power wants to take you out, they can go back thru your life in minute detail. If laws or norms change so that what was OK 10 or 30 years ago isn’t OK now, well, they can get you on that.

Ian Welsh

As the world becomes more unequal, the position of elites becomes more perilous, hence Silicon Valley billionaires preparing boltholes in New Zealand. Ironically, they're looking for places where they can't be found, while making serious money from providing surveillance technology. Instead of solving the inequality, they attempt to insulate themselves from the effect of that inequality.

A lot of the crazy amounts of money earned in Silicon Valley comes at the price of infringing our privacy. I've spent a long time thinking about this quite nebulous concept. It's not the easiest thing to understand when you examine it more closely.

Privacy is usually considered a freedom from rather than a freedom to, as in "freedom from surveillance". The trouble is that there are many kinds of surveillance, and some of these we actively encourage. A quick example: I know of at least one family that share their location with one another all of the time. At the same time, of course, they're sharing it with the company that provides that service.

There's a lot of power in the 'default' privacy settings devices and applications come with. People tend to go with whatever comes as standard. Sidney Fussell writes in The Atlantic that:

Many apps and products are initially set up to be public: Instagram accounts are open to everyone until you lock them... Even when companies announce convenient shortcuts for enhancing security, their products can never become truly private. Strangers may not be able to see your selfies, but you have no way to untether yourself from the larger ad-targeting ecosystem.

Sidney Fussell

Some of us (including me) are willing to trade some of that privacy for more personalised services that somehow make our lives easier. The tricky thing is when it comes to employers and state surveillance. In these cases there are coercive power relationships at play, rather than just convenience.

Ellen Sheng, writing for CNBC explains how employees in the US are at huge risk from workplace surveillance:

In the workplace, almost any consumer privacy law can be waived. Even if companies give employees a choice about whether or not they want to participate, it’s not hard to force employees to agree. That is, unless lawmakers introduce laws that explicitly state a company can’t make workers agree to a technology...

One example: Companies are increasingly interested in employee social media posts out of concern that employee posts could reflect poorly on the company. A teacher’s aide in Michigan was suspended in 2012 after refusing to share her Facebook page with the school’s superintendent following complaints about a photo she had posted. Since then, dozens of similar cases prompted lawmakers to take action. More than 16 states have passed social media protections for individuals.

Ellen Sheng

It's not just workplaces, though. Schools are hotbeds for new surveillance technologies, as Benjamin Herold notes in an article for Education Week:

Social media monitoring companies track the posts of everyone in the areas surrounding schools, including adults. Other companies scan the private digital content of millions of students using district-issued computers and accounts. Those services are complemented with tip-reporting apps, facial-recognition software, and other new technology systems.

[...]

While schools are typically quiet about their monitoring of public social media posts, they generally disclose to students and parents when digital content created on district-issued devices and accounts will be monitored. Such surveillance is typically done in accordance with schools’ responsible-use policies, which students and parents must agree to in order to use districts’ devices, networks, and accounts.

Hypothetically, students and families can opt out of using that technology. But doing so would make participating in the educational life of most schools exceedingly difficult.

Benjamin Herold

In China, of course, a social credit system makes all of this a million times worse, but we in the West aren't heading in a great direction either.

We're entering a time where, by the time my children are my age, companies, employers, and the state could have decades of data on them, from when they entered the school system through to finding jobs and becoming parents themselves.

There are upsides to all of this data, obviously. But I think that in the midst of privacy-focused conversations about Amazon's smart speakers and Google location-sharing, we might be missing the bigger picture around surveillance by educational institutions, employers, and governments.

Returning to Ian Welsh to finish up, remember that it's the coercive power relationships that make surveillance a bad thing:

Surveillance societies are sterile societies. Everyone does what they’re supposed to do all the time, and because we become what we do, it affects our personalities. It particularly affects our creativity, and is a large part of why Communist surveillance societies were less creative than the West, particularly as their police states ramped up.

Ian Welsh

We don't want to think about all of this, though, do we?


Also check out:

Only thoughts conceived while walking have any value

Philosopher and intrepid walker Friedrich Nietzsche is well known for today's quotation-as-title. Fellow philosopher Immanuel Kant was a keen walker, too, along with Henry David Thoreau. There's just something about big walks and big thoughts.

I spent a good part of yesterday walking about 30km because I woke wanting to see the sea. It has a calming effect on me, and my wife was at work with the car. Forty-thousand steps later, I'd not only succeeded in my mission and taken the photo that accompanies this post, but managed to think about all kinds of things that definitely wouldn't have entered my mind had I stayed at home.

I want to focus the majority of this article on a single piece of writing by Craig Mod, whose walk across Japan I followed by SMS. Instead of sharing the details of his 620-mile, six-week trek via social media, he updated a server which then sent text messages (with photographs, so technically MMS) to everyone who'd signed up to receive them. Readers could reply, but he didn't receive these until he'd finished the walk and they'd been automatically curated into a book and sent to him.

Writing in WIRED, Mod talks of his "glorious, almost-disconnected walk" which was part experiment, part protest:

I have configured servers, written code, built web pages, helped design products used by millions of people. I am firmly in the camp that believes technology is generally bending the world in a positive direction. Yet, for me, Twitter foments neurosis, Facebook sadness, Google News a sense of foreboding. Instagram turns me covetous. All of them make me want to do it—whatever “it” may be—for the likes, the comments. I can’t help but feel that I am the worst version of myself, being performative on a very short, very depressing timeline. A timeline of seconds.

[...]

So, a month ago, when I started walking, I decided to conduct an experiment. Maybe even a protest. I wanted to test hypotheses. Our smartphones are incredible machines, and to throw them away entirely feels foolhardy. The idea was not to totally disconnect, but to test rational, metered uses of technology. I wanted to experience the walk as the walk, in all of its inevitably boring walkiness. To bask in serendipitous surrealism, not just as steps between reloading my streams. I wanted to experience time.

Craig Mod

I love this; it's so inspiring. The most consecutive days I've walked is only two, so I can't even really imagine what it must be like to walk for weeks at a time. It's a form of meditation, I suppose, and a way to re-centre oneself.

The longness of an activity is important. Hours or even days don’t really cut it when it comes to long. “Long” begins with weeks. Weeks of day-after-day long walking days, 30- or 40-kilometer days. Days that leave you wilted and aware of all the neglect your joints and muscles have endured during the last decade of sedentary YouTubing.

[...]

In the context of a walk like this, “boredom” is a goal, the antipode of mindless connectivity, constant stimulation, anger and dissatisfaction. I put “boredom” in quotes because the boredom I’m talking about fosters a heightened sense of presence. To be “bored” is to be free of distraction.

Craig Mod

I find that when I walk for any period of time, certain songs start going through my head. Yesterday, for example, my brain put on repeat the song Good Enough by Dodgy from their album Free Peace Sweet. The time before it was We Can Do It from Jamiroquai's latest album Automaton. I'm not sure where it comes from, although the beat does have something to do with my pace.

Walking by oneself seems to do something to the human brain akin to unlocking the subconscious. That's why I'm not alone in calling it a 'meditative' activity. While I enjoy walking with others, the brain seems to work in a different way when you're by yourself, propelled by your own two legs.

It's easy to feel like we're not 'keeping up' with work, with family and friends, and with the news. The truth is, however, that the most important person to 'keep up' with is yourself. Having a strong sense of self, I believe, is the best way to live a life with meaning.

It might sound 'boring' to go for a long walk, but as Alain de Botton notes in The News: a user's manual, getting out of our routine is sometimes exactly what we need:

What we colloquially call 'feeling bored' is just the mind, acting out of a self-preserving reflex, ejecting information it has despaired of knowing where to place.

Alain de Botton

I'm not going to tell you what I thought about during my walk today as, outside of the rich (inner and outer) context in which the thinking took place, whatever I write would probably sound banal.

To me, however, the thoughts I had today will, like all of the thoughts I've had while doing some serious walking, help me organise my future actions. Perhaps that's what Nietzsche meant when he said that only thoughts conceived while walking have any value.


Also check out:

  • One step ahead: how walking opens new horizons (The Guardian) — "Walking provides just enough diversion to occupy the conscious mind, but sets our subconscious free to roam. Trivial thoughts mingle with important ones, memories sharpen, ideas and insights drift to the surface."
  • A Philosophy of Walking (Frédéric Gros) — "a bestseller in France, leading thinker Frédéric Gros charts the many different ways we get from A to B—the pilgrimage, the promenade, the protest march, the nature ramble—and reveals what they say about us."
  • What 10,000 Steps Will Really Get You (The Atlantic) — "While basic guidelines can be helpful when they’re accurate, human health is far too complicated to be reduced to a long chain of numerical imperatives. For some people, these rules can even do more harm than good."

What is no good for the hive is no good for the bee

So said Roman Emperor and Stoic philosopher Marcus Aurelius. In this article, I want to apply that to our use of technology as well as the stories we tell one another about that technology use.

Let's start with an excellent post by Nolan Lawson who, when I started using Twitter less, actually deleted his account and went all-in on the Fediverse. He maintains a Mastodon web client called Pinafore, and is a clear-headed thinker on all things open. The post is called Tech veganism and sums up the problem I have with holier-than-thou open advocates:

I find that there’s a bit of a “let them eat cake” attitude among tech vegan boosters, because they often discount the sheer difficulty of all this stuff. (“Let them use Linux” could be a fitting refrain.) After all, they figured it out, so why can’t you? What, doesn’t everyone have a computer science degree and six years experience as a sysadmin?

To be a vegan, all you have to do is stop eating animal products. To be a tech vegan, you have to join an elite guild of tech wizards and master their secret arts. And even then, you’re probably sneaking a forbidden bite of Google or Apple every now and then.

Nolan Lawson

It's that second paragraph that's the killer for me. I'm pescetarian, and in Lawson's lingo my tech choices are probably about the equivalent of that. I definitely agree with him that the conversation is already shifting away from open source and free software towards what Mark Zuckerberg (shudder) calls "time well spent":

I also suspect that tech veganism will begin to shift, if it hasn’t already. I think the focus will become less about open source vs closed source (the battle of the last decade) and more about digital well-being, especially in regards to privacy, addiction, and safety. So in this way, it may be less about switching from Windows to Linux and more about switching from Android to iOS, or from Facebook to more private channels like Discord and WhatsApp.

Nolan Lawson

This is reminiscent of Yancey Strickler's notion of 'dark forests'. I can definitely see more call for nuance around private and public spaces.

So much of this, though, depends on your worldview. Everyone likes the idea of 'freedom', but are we talking about 'freedom from' or 'freedom to'? How important are different types of freedom? Should all information be available to everyone? Where do rights start and responsibilities stop (and vice-versa)?

One thing I've found fascinating is how the world changes and debates get left behind. For example, the idea (and importance) of Linux on the desktop has been something that people have been discussing most of my adult life. At the same time, cloud computing has changed the game, with a lot of the data processing and heavy lifting being done by servers — most of which are powered by Linux!

Mark Shuttleworth, CEO of Canonical, the company behind Ubuntu Linux, said in a recent interview:

I think the bigger challenge has been that we haven't invented anything in the Linux that was like deeply, powerfully ahead of its time... if in the free software community we only allow ourselves to talk about things that look like something that already exists, then we're sort of defining ourselves as a series of forks and fragmentations.

Mark Shuttleworth

This is a problem that's wider than just software. Those of us who are left-leaning are more likely to let small ideological differences dilute our combined power. That affects everything from opposing Brexit, to getting people to switch to Linux. There's just too much noise, too many competing options.

Meanwhile, as the P2P Foundation notes, businesses swoop in and use open licenses to enclose the Commons:

[I]t is clear that these Commons have become an essential infrastructure without which the Internet could no longer function today (90% of the world’s servers run on Linux, 25% of websites use WordPress, etc.) But many of these projects suffer from maintenance and financing problems, because their development depends on communities whose means are unrelated to the size of the resources they make available to the whole world.

[...]

This situation corresponds to a form of tragedy of the Commons, but of a different nature from that which can strike material resources. Indeed, intangible resources, such as software or data, cannot by definition be over-exploited and they even increase in value as they are used more and more. But tragedy can strike the communities that participate in the development and maintenance of these digital commons. When the core of individual contributors shrinks and their strengths are exhausted, information resources lose quality and can eventually wither away.

P2P Foundation

So what should we do? One thing we've done with MoodleNet is to ensure that it has an AGPL license, one that Google really doesn't like. They state perfectly the reasons why we selected it:

The primary risk presented by AGPL is that any product or service that depends on AGPL-licensed code, or includes anything copied or derived from AGPL-licensed code, may be subject to the virality of the AGPL license. This viral effect requires that the complete corresponding source code of the product or service be released to the world under the AGPL license. This is triggered if the product or service can be accessed over a remote network interface, so it does not even require that the product or service is actually distributed.

Google

So, in other words, if you run a server with AGPL code, or create a project with source code derived from it, you must make that code available to others. To me, it has the same 'viral effect' as the Creative Commons BY-SA license.

As Benjamin "Mako" Hill points out in a recent keynote, we need to be a bit wiser when it comes to 'choosing a side'. Cory Doctorow, summarising Mako's keynote, says:

[M]arkets discovered free software and turned it into "open source," figuring out how to create developer communities around software ("digital sharecropping") that lowered their costs and increased their quality. Then the companies used patents and DRM and restrictive terms of service to prevent users from having any freedom.

Mako says that this is usually termed "strategic openness," in which companies take a process that would, by default, be closed, and open the parts of it that make strategic sense for the firm. But really, this is "strategic closedness" -- projects that are born open are strategically enclosed by companies to allow them to harvest the bulk of the value created by these once-free systems.

[...]

Mako suggests that the time in which free software and open source could be uneasy bedfellows is over. Companies' perfection of digital sharecropping means that when they contribute to "free" projects, all the freedom will go to them, not the public.

Cory Doctorow

It's certainly an interesting time we live in, when the people who are pointing out all of the problems (the 'tech vegans') are seen as the problem, and the VC-backed companies as the disruptive champions of the people. Tech follows politics, though, I guess.


Also check out:

  • Is High Quality Software Worth the Cost? (Martin Fowler) — "I thus divide software quality attributes into external (such as the UI and defects) and internal (architecture). The distinction is that users and customers can see what makes a software product have high external quality, but cannot tell the difference between higher or lower internal quality."
  • What the internet knows about you (Axios) — "The big picture: Finding personal information online is relatively easy; removing all of it is nearly impossible."
  • Against Waldenponding II (ribbonfarm) — "Waldenponding is a search for meaning that is circumscribed by what you might call the spiritual gravity field of an object or behavior held up as ineffably sacred."

Friday fabrications

These things made me sit up and take notice:


Image via xkcd

Men fear wanderers for they have no rules

A few years ago, when I was at Mozilla, a colleague mentioned a series of books by Bernard Cornwell called The Last Kingdom. It seemed an obvious fit for me, he said, given my interest in history and the fact that I live in Northumberland. A couple of years later, I got around to reading the series, and loved it. The quote that serves as the title for this article is from the second book in the series: The Pale Horseman.

Another book I read that I wasn't expecting to enjoy was Ender's Game, a sci-fi novel by Orson Scott Card. I was looking for a quotation about Ender's access to networks when I came across this one from another one of the author's novels:

“Every person is defined by the communities she belongs to.”

Orson Scott Card

Some people say that you're the average of the five people you surround yourself with. In this day and age, 'surrounding yourself' isn't necessarily a physical activity; it's to do with your interactions, however they occur.

It's easy to think about the time we spend at home with our nearest and dearest, but what about our networked interactions? For example, I've been playing a lot of Red Dead Redemption 2 with Dai Barnes recently, so that might count as an example — and so might the time we spend on Twitter, Instagram, and other social networks.

All of this brings us to an article I came across via Aaron Davis. Entitled The Dark Forest Theory of the Internet, Yancey Strickler explains how we're moving into a different era of interaction. He channels sci-fi author Liu Cixin:

Imagine a dark forest at night. It’s deathly quiet. Nothing moves. Nothing stirs. This could lead one to assume that the forest is devoid of life. But of course, it’s not. The dark forest is full of life. It’s quiet because night is when the predators come out. To survive, the animals stay silent.

[...]

Dark forests like newsletters and podcasts are growing areas of activity. As are other dark forests, like Slack channels, private Instagrams, invite-only message boards, text groups, Snapchat, WeChat, and on and on. This is where Facebook is pivoting with Groups (and trying to redefine what the word “privacy” means in the process).

These are all spaces where depressurized conversation is possible because of their non-indexed, non-optimized, and non-gamified environments. The cultures of those spaces have more in common with the physical world than the internet.

Yancey Strickler

What Strickler doesn't go into is the effect that this may have on western democracies. This is something, however, that is covered by an excellent book I read last week called The People vs Tech by Jamie Bartlett. The author explains how even mainstream social networks have become fragmented:

Over the last few years... the nature of political disagreement has changed. It's gone tribal. It is becoming hyper-partisan, characterised by fierce group loyalty that sometimes approaches leader worship, a tendency to overlook one's own failings while exaggerating one's enemies' and a dislike of compromise with opponents.

Jamie Bartlett

Bartlett cites the work of cyber-psychologist John Suler, who theorises about why people act differently online:

Suler argues that because we don't know or see the people we are speaking to (and they don't know or see us), because communication is instant, seemingly without rules or accountability, and because it all takes place in what feels like an alternative reality, we do things we wouldn't in real life. Suler calls this 'toxic disinhibition'. This is what all the articles about 'echo chambers' and 'filter bubbles' miss. The internet doesn't only create small tribes: it also gives easy access to enemy tribes. I see opposing views to mine online all the time; they rarely change my mind, and more often simply confirm my belief that I am the only sane person in a sea of internet idiots.

Jamie Bartlett

We're witnessing the breakdown of the attempt to create general-purpose social networks. Instead, just like the offline world, we'll end up with different spaces and areas for different purposes. Here's a Slack channel to talk with former colleagues; here's a Telegram group to talk with your family; here's a Twitter account to share blog posts with your followers.

I'm not so sure this is such a bad thing, to be honest. So long as those spaces aren't subject to the kind of dark advertising that's led to political havoc and ramifications over the last few years, I see it as a sort of rebalancing.


Also check out:

  • A parent's guide to raising a good digital citizen (Engadget) — "How do kids learn digital citizenship? The same way they learn how to be good citizens: They watch good role models, and they practice."
  • Can "Indie" Social Media Save Us? (The New Yorker) — "When you confine your online activities to so-called walled-garden networks, you end up using interfaces that benefit the owners of those networks."
  • I was wrong about networks (George Siemens) — "I'll hold to my mantra that it's networks all the way down. I need to add a critical caveat: all connections and networks occur within a system."

We give nothing so generously as our advice

Thanks, François de La Rochefoucauld, but despite the above title coming from you (c.1678), this post is actually inspired by Warren Ellis. I subscribe to many, many newsletters, and one of my favourites is Ellis' Orbital Operations, which goes out every Sunday.

Recently, Ellis talked about the development of his newsletter, over the course of a four-part 'blogchain'. I've been meaning to write up how Thought Shrapnel has evolved recently, so I'm going to use this as a prompt to do so.

Patreon page for Thought Shrapnel

First up, Thought Shrapnel is now primarily a website with an email roundup. It's no longer, strictly speaking, a 'newsletter'. There are around 1,500 people who subscribe to the email that I send out every Sunday, and 56 of those support its continued existence via Patreon.

This site uses WordPress with a number of plugins. I host it via a Digital Ocean droplet and pay for Jetpack to get automatic daily backups and better statistics. I schedule posts every weekday which are immediately accessible to supporters, and then available on the open web a week later.

Here are three plugins that really help with my new workflow:

  • Add widget after comment — allows me to add automatically after a post anything I'd usually add to a sidebar. I use it to encourage people to become supporters.
  • MailPoet — I use this to automatically send out each post to supporters and to curate the weekly round-up to both supporters and subscribers.
  • tao-schedule-update — means I can schedule updates to already published posts, changing categories, visibility, etc.

Over the years, I've experimented with Instapaper, social bookmarking sites, Evernote, and all sorts of other tools for saving things. Right now, I'm using Pocket to rediscover things I come across that I'd like to read later. That means that when I sit down to write, I find something interesting and then look for something else I could link it with. Eventually, I come up with six links that in some way go together, and then I write something based on those.

In terms of titles for my articles, I've started using quotations. These tend to come from Kindle highlights or dead-tree books I've read. Sometimes they just come from Goodreads. Either way, I've got a bunch of drafts with just the title and the attribution ready to go.

Images to accompany articles used to come almost exclusively from Unsplash, but I've recently added Pixabay into the mix to add a bit of variety. Neither site requires attribution.

Chart showing visitors to thoughtshrapnel.com during May 2019

It's interesting to me to see the cadence of visitors to Thought Shrapnel over the course of a week. It's pretty obvious to see which day is Sunday, as that's when I send out the round-up email!

What I really like about my current setup is that everything is now controlled by me. I spend about £10/month on Digital Ocean, Jetpack is £33/year, and MailPoet is free up to 2,000 users. The domain name is about £16/year. All in all, for about £15/month I've got a secure, fast-loading site of which I'm in complete control.
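As a quick sanity check on those figures, here's the back-of-the-envelope arithmetic (amounts in GBP, exactly as quoted above):

```python
# Rough monthly running costs for the site, from the figures above (GBP).
droplet_monthly = 10.0   # Digital Ocean droplet, per month
jetpack_yearly = 33.0    # Jetpack backups and stats, per year
domain_yearly = 16.0     # domain name, per year

monthly_total = droplet_monthly + (jetpack_yearly + domain_yearly) / 12
print(f"~ £{monthly_total:.2f}/month")  # ~ £14.08/month
```

Which squares with the "about £15/month" figure once you allow for rounding.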

Some people use the idea of a Commonplace book to describe what they do. Warren Ellis talks of a 'Republic of Newsletters' to evoke a modern-day equivalent of the so-called Republic of Letters amongst the 17th and 18th century intellectual community. Me? I'm just happy to create something that I enjoy writing and from which other people seem to gain value!


PS for those wondering, the excellent Thought Shrapnel logo is courtesy of Bryan Mathers and is available as a sticker for $3/month supporters!

Man must choose whether to be rich in things or in the freedom to use them

So said Ivan Illich. Another person I can imagine saying that is Diogenes the Cynic, perhaps my favourite philosopher of all time. He famously lived in a large barrel, sometimes pretended he was a dog, and allegedly told Alexander the Great to stand out of his sunlight.

What a guy. The thing that Diogenes understood is that freedom is much more important than power. That's the subject of a New York Times Op-Ed by essayist and cartoonist Tim Kreider, who explains:

I would define power as the ability to make other people do what you want; freedom is the ability to do what you want. Like gravity and acceleration, these are two forces that appear to be different but are in fact one. Freedom is the defensive, or pre-emptive, form of power: the power that’s necessary to resist all the power the world attempts to exert over us from day one. So immense and pervasive is this force that it takes a considerable counterforce just to restore and maintain mere autonomy. Who was ultimately more powerful: the conqueror Alexander, who ruled the known world, or the philosopher Diogenes, whom Alexander could neither offer nor threaten with anything? (Alexander reportedly said that if he weren’t Alexander, he would want to be Diogenes. Diogenes said that if he weren’t Diogenes, he’d want to be Diogenes too.)

Tim Kreider

Of course, Tim is a privileged white dude, just like me. His opinion piece does, however, give us an interesting way into the cultural phenomenon of young white men opting out of regular employment.

As Andrew Fiouzi writes for Mel Magazine, the gap between what you're told (and what you see your older relatives achieving) and what you're offered can sometimes be stark. Michael Madowitz, an economist at the Center for American Progress, is cited by Fiouzi in the article.

While there’s a lot of speculation as to why this is the case, Madowitz says it has little to do with the common narrative that millennial men are too busy playing video games. Instead, he argues that millennials... who entered the labor market at a time when it was less likely than ever to adequately reward them for their work — “I couldn’t get any interviews and I tried doing some freelance stuff, but I could barely find anything, so I took an unpaid internship at a design agency,” says [one example] — were simply less likely to feel the upside of working.

Andrew Fiouzi

By default in our western culture, no matter how much a man earns, if he's in a heterosexual relationship, then it's the woman who becomes the care-giver after they have children. I think that's changing a bit, and men are more likely to at least share the responsibilities.

So in the end, it may be the very inflexibility of an economy built on traditional gender roles that ultimately brings down the male-dominated labor apparatus, one stay-at-home dad at a time.

ANDREW FIOUZI

Part of the problem, I think, is the constant advice to 'follow your heart' and find work that's 'your passion'. While I think you absolutely should be guided by your values, how that plays out depends a lot on context.

Pavithra Mohan takes this up in an article for Fast Company. She writes:

Sometimes, compensation or job function may be more important to you than meaning, while at other times location and flexibility may take precedence. 

[...]

Something that can get lost in the conversation around meaningful work is that even pursuing it takes privilege.

[...]

Making an impact can also mean very different things to different people. If you feel fulfilled by your family or social life, for example, being connected to your work may not—and need not—be of utmost importance. You might find more meaning in volunteer work or believe you can make more of an impact by practicing effective altruism and putting the money you earn towards charitable causes. 

Pavithra Mohan

I've certainly been thinking about that this Bank Holiday weekend. What gets squeezed out in your personal life, when you're busy trying to find the perfect 'work' life? Or, to return to a question that Jocelyn K. Glei asks, who are you without the doing?


Also check out:

  • Is pleasure all that is good about experience? (Journal of Philosophical Studies) — "In this article I present the claim that hedonism is not the most plausible experientialist account of wellbeing. The value of experience should not be understood as being limited to pleasure, and as such, the most plausible experientialist account of wellbeing is pluralistic, not hedonistic."
  • Strong Opinions Loosely Held Might be the Worst Idea in Tech (The Glowforge Blog) — "What really happens? The loudest, most bombastic engineer states their case with certainty, and that shuts down discussion. Other people either assume the loudmouth knows best, or don’t want to stick out their neck and risk criticism and shame. This is especially true if the loudmouth is senior, or there is any other power differential."
  • Why Play a Music CD? ‘No Ads, No Privacy Terrors, No Algorithms’ (The New York Times) — "What formerly hyped, supposedly essential technology has since been exposed for gross privacy violations, or for how easily it has become a tool for predatory disinformation?"

We never look at just one thing; we are always looking at the relation between things and ourselves

Today's title comes from John Berger's Ways of Seeing, which is an incredible book. Soon after the above quotation, he continues,

The eye of the other combines with our own eye to make it fully credible that we are part of the visible world.

John Berger

That period of time when you come to be you is really interesting. As an adolescent, and before films like The Matrix, I can remember thinking that the world literally revolved around me; that other people were testing me in some way. I hope that's kind of normal, and I'd add somewhat hastily that I grew out of that way of thinking a long time ago. Obviously.

All of this is a roundabout way of saying that we cannot know the 'inner lives' of other people, or in fact that they have them. Writing in The Guardian, psychologist Oliver Burkeman notes that we sail through life assuming that we experience everything similarly, when that's not true at all:

A new study on a technical-sounding topic – “genetic variation across the human olfactory receptor repertoire” – is a reminder that we smell the world differently... Researchers found that a single genetic mutation accounts for many of those differences: the way beetroot smells (and tastes) like disgustingly dirty soil to some people, or how others can’t detect the smokiness of whisky, or smell lily of the valley in perfumes.

Oliver Burkeman

I know that my wife sees colours differently to me, as purple is one of her favourite colours. Neither of us is colour-blind, but some things she calls 'purple' are in no way 'purple' to me.

So when it comes to giving one another feedback, where should we even begin? How can we know the intentions or the thought processes behind someone's actions? In an article for Harvard Business Review, Marcus Buckingham and Ashley Goodall explain that our approach to feedback rests on three theories:

  1. Other people are more aware than you are of your weaknesses
  2. You lack certain abilities you need to acquire, so your colleagues should teach them to you
  3. Great performance is universal, analyzable, and describable, and that once defined, it can be transferred from one person to another, regardless of who each individual is

All of these, the authors claim, are false:

What the research has revealed is that we’re all color-blind when it comes to abstract attributes, such as strategic thinking, potential, and political savvy. Our inability to rate others on them is predictable and explainable—it is systematic. We cannot remove the error by adding more data inputs and averaging them out, and doing that actually makes the error bigger.

Buckingham & Goodall

What I liked was their actionable advice about how to help colleagues thrive, captured in this table:

The Right Way to Help Colleagues Excel
Taken from 'The Feedback Fallacy' by Marcus Buckingham and Ashley Goodall

Finally, as an educator and parent, I've noticed that human learning doesn't follow a linear trajectory. Anything but, in fact. Yet we talk and interact as though it does. That's why I found Good Things By Their Nature Are Fragile by Jason Kottke so interesting; it quotes a 2005 post from Michael Barrish. I'm going to quote the same section as Kottke:

In 1988 Laura and I created a three-stage model of what we called “living process.” We called the three stages Good Thing, Rut, and Transition. As we saw it, Good Thing becomes Rut, Rut becomes Transition, and Transition becomes Good Thing. It’s a continuous circuit.

A Good Thing never leads directly to a Transition, in large part because it has no reason to. A Good Thing wants to remain a Good Thing, and this is precisely why it becomes a Rut. Ruts, on the other hand, want desperately to change into something else.

Transitions can be indistinguishable from Ruts. The only important difference is that new events can occur during Transitions, whereas Ruts, by definition, consist of the same thing happening over and over.

Michael Barrish

In life, sometimes we don't even know what stage we're in, never mind other people. So let's cut one another some slack, dispel the three myths about feedback listed above, and allow people to be different to us in diverse and glorious ways.


Also check out:

  • Iris Murdoch, The Art of Fiction No. 117 (The Paris Review) — "I would abominate the idea of putting real people into a novel, not only because I think it’s morally questionable, but also because I think it would be terribly dull."
  • How an 18th-Century Philosopher Helped Solve My Midlife Crisis (The Atlantic) — "I had found my salvation in the sheer endless curiosity of the human mind—and the sheer endless variety of human experience."
  • A brief history of almost everything in five minutes (Aeon) — "According to [the artist], the piece ‘is intended for both introspection and self-reflection, as a mirror to ourselves, our own mind and how we make sense of what we see; and also as a window into the mind of the machine, as it tries to make sense of its observations and memories’."

Header image: webcomicname.com

Friday fumblings

These were the things I came across this week that made me smile:


Image via Why WhatsApp Will Never Be Secure (Pavel Durov)

One can see only what one has already seen

Fernando Pessoa with today's quotation-as-title. He's best known for The Book of Disquiet, which he called "a factless autobiography". It's... odd. Here's a sample:

Whether or not they exist, we're slaves to the gods.

Fernando Pessoa

I've been reading a lot of Seneca recently, who famously said:

Life is divided into three periods, past, present and future. Of these, the present is short, the future is doubtful, the past is certain.

Seneca

The trouble is, we try to predict the future in order to control it. Some people have a good track record in this, partly because they are involved in shaping things in the present. Other people have a vested interest in trying to get the world to bend to their ideology.

In an article for WIRED, Joi Ito, Director of the MIT Media Lab, writes about 'extended intelligence' being the future rather than AI:

The notion of singularity – which includes the idea that AI will supersede humans with its exponential growth, making everything we humans have done and will do insignificant – is a religion created mostly by people who have designed and successfully deployed computation to solve problems previously considered impossibly complex for machines.

Joi Ito

It's a useful counter-balance to those banging the AI drum and talking about the coming jobs apocalypse.

After talking about 'S curves' and adaptive systems, Ito explains that:

Instead of thinking about machine intelligence in terms of humans vs machines, we should consider the system that integrates humans and machines – not artificial intelligence but extended intelligence. Instead of trying to control or design or even understand systems, it is more important to design systems that participate as responsible, aware and robust elements of even more complex systems.

Joi Ito

I haven't had a chance to read it yet, but I'm looking forward to seeing some of the ideas put forward in The Weight of Light: a collection of solar futures (which is free to download in multiple formats). We need to stop listening solely to rich white guys proclaiming the Silicon Valley narrative of 'disruption'. There are many other, much more collaborative and egalitarian, ways of thinking about and designing for the future.

This collection was inspired by a simple question: what would a world powered entirely by solar energy look like? In part, this question is about the materiality of solar energy—about where people will choose to put all the solar panels needed to power the global economy. It’s also about how people will rearrange their lives, values, relationships, markets, and politics around photovoltaic technologies. The political theorist and historian Timothy Mitchell argues that our current societies are carbon democracies, societies wrapped around the technologies, systems, and logics of oil. What will it be like, instead, to live in the photon societies of the future?

The Weight of Light: a collection of solar futures

We create the future, it doesn't just happen to us. My concern is that we don't recognise the signs that we're in the last days. Someone shared this quotation from the philosopher Kierkegaard recently, and I think it describes where we're at pretty well:

A fire broke out backstage in a theatre. The clown came out to warn the public; they thought it was a joke and applauded. He repeated it; the acclaim was even greater. I think that's just how the world will come to an end: to general applause from wits who believe it's a joke.

Søren Kierkegaard

Let's hope we collectively wake up before it's too late.


Also check out:

  • Are we on the road to civilisation collapse? (BBC Future) — "Collapse is often quick and greatness provides no immunity. The Roman Empire covered 4.4 million sq km (1.9 million sq miles) in 390. Five years later, it had plummeted to 2 million sq km (770,000 sq miles). By 476, the empire’s reach was zero."
  • Fish farming could be the center of a future food system (Fast Company) — "Aquaculture has been shown to have 10% of the greenhouse gas emissions of beef when it’s done well, and 50% of the feed usage per unit of production as beef"
  • The global internet is disintegrating. What comes next? (BBC FutureNow) — "A separate internet for some, Facebook-mediated sovereignty for others: whether the information borders are drawn up by individual countries, coalitions, or global internet platforms, one thing is clear – the open internet that its early creators dreamed of is already gone."

Everything that needs to be said has already been said. But since no one was listening, everything must be said again

Today's title comes courtesy of Nobel Prize winner André Gide. If you've got children, you're probably reading this with a wry smile on your face. Yep, today's article is all about parenting.

I'd like to start with a couple of Lifehacker interviews: one with Mike Adamick, author of Raising Empowered Daughters, and the other with Austin Kleon, best known for Steal Like An Artist. Adamick makes a really important point for those of us with daughters:

Kids, and I think especially girls, are expected to be these perfect little achievers as they get older. Good grades, good at sports, good friends. There’s so much pressure and I wanted her to know, and I think I make a compelling example, that everyone messes up all the time and it’s okay.

Mike Adamick

Towards the end of the interview, Adamick goes on to say:

You get to define what your circles look like, and you can do tremendous good in your social, work, and family circles by playing a more active role in helping our girls not have to navigate a sexist society and by helping our boys to access their full emotional selves, not just a one-size-fits-all masculinity that can so easily slide into anger and entitlement. We’re all in this together, and we have a lot more power than we imagine we do.

Mike Adamick

It's hard to realise, as a straight white man, that despite your best intentions you're actually part of the problem, part of the patriarchy. All you can really do is go out of your way to try and square things up through actions, not just words. And that includes your role as son and husband as much as parent.

Austin Kleon, being an author and artist, frames things in terms of children and his work. The image he shares (which I've included as the header for this article) absolutely slayed me. Although I try to explain to my own children what I'm doing when I'm using my laptop, I'm pretty sure they see the very different things I'm doing as just 'being on the computer'.

He gives the kind of advice that I sometimes give to soon-to-be fathers:

During a birthing class, my father-in-law, who was a veteran parent at that point, was asked if he had any advice for the rookie parents. He stood up and said, “You’re going to want to throw them out the window. And that’s okay! The important thing is that you don’t.”

Austin Kleon

Parenting is the hardest, but probably most rewarding, job in the world. You always feel like you could be doing better, and that you could be providing more for your offspring. The truth is, though, that they actually need to see you as a human being, as someone who experiences the ups and downs of life. The vicissitudes of emotional experience are what makes us human — and, perhaps most importantly, our children learn from us how to deal with that rollercoaster.


Also check out:

  • You Don't Have to Define What Type of Parent You Are (Offspring) — "My standards vary based on the day of the week, the direction of the wind and my general mood. I have absolutely no idea what kind of parent I am other than hopefully a decent one."
  • Parents: let your kids fail. You’ll be doing them a favor (Quartz) — "The dirty secret of parenting is that kids can do more than we think they can, and it’s up to us to figure that out."
  • Parents Shouldn’t Spy on Their Kids (Nautilus) — "Adolescence is a critical time in kids’ lives, when they need privacy and a sense of individual space to develop their own identities. It can be almost unbearable for parents to watch their children pull away. But as tempting as it may be for parents to infiltrate the dark corners of their children’s personal lives, there’s good evidence that snooping does more harm than good."

Everyone hustles his life along, and is troubled by a longing for the future and weariness of the present

Thanks to Seneca for today's quotation, taken from his still-all-too-relevant On the Shortness of Life. We're constantly being told that we need to 'hustle' to make it in today's society. However, as Dan Lyons points out in a book I'm currently reading, Lab Rats: how Silicon Valley made work miserable for the rest of us, we're actually being 'immiserated' for the benefit of venture capitalists.

As anyone who's read Daniel Kahneman's book Thinking, Fast and Slow will know, there are two dominant types of thinking:

The central thesis is a dichotomy between two modes of thought: "System 1" is fast, instinctive and emotional; "System 2" is slower, more deliberative, and more logical. The book delineates cognitive biases associated with each type of thinking, starting with Kahneman's own research on loss aversion. From framing choices to people's tendency to replace a difficult question with one which is easy to answer, the book highlights several decades of academic research to suggest that people place too much confidence in human judgement.

Wikipedia

Cal Newport calls 'System 2' thinking something else in his book of the same name: Deep Work. Seneca, Kahneman, and Newport are all basically saying the same thing, but with different emphasis. We need to allow ourselves time for the slower, more deliberative work that makes us uniquely human.

That kind of work doesn't happen when you're being constantly interrupted, nor when you're in an environment that isn't comfortable, nor when you're fearful that your job may not exist next week. A post for the Nuclino blog entitled Slack Is Not Where 'Deep Work' Happens uses a potentially apocryphal tale to illustrate the point:

On one morning in 1797, the English poet Samuel Taylor Coleridge was composing his famous poem Kubla Khan, which came to him in an opium-induced dream the night before. Upon waking, he set about writing until he was interrupted by an unknown person from Porlock. The interruption caused him to forget the rest of the lines, and Kubla Khan, only 54 lines long, was never completed.

Nuclino blog

What we're actually doing by forcing everyone to use synchronous tools like Slack is a form of journalistic rhythm — but without everyone being synced-up:

Diagram courtesy of the Nuclino blog

If you haven't read Deep Work, never fear, because there's an epic article by Fadeke Adegbuyi for Doist entitled The Complete Guide to Deep Work, which is particularly useful:

This is an actionable guide based directly on Newport’s strategies in Deep Work. While we fully recommend reading the book in its entirety, this guide distills all of the research and recommendations into a single actionable resource that you can reference again and again as you build your deep work practice. You’ll learn how to integrate deep work into your life in order to execute at a higher level and discover the rewards that come with regularly losing yourself in meaningful work.

Fadeke Adegbuyi

Lots of articles and podcast episodes say they're 'actionable' or provide 'tactics' for success. I have to say this one delivers. I'd still read Newport's book, though.

Interestingly, despite all of the ridiculousness spouted by VCs, people are pretty clear about how they can do their best work. Following a Dropbox survey of 500 US-based workers in the knowledge economy, Ben Taylor outlines four 'lessons' learned:

  1. More workers want to slow down to get things right — "In reality, 61% of workers said they wanted to “slow down to get things right” while only 41% wanted to “go fast to achieve more.” The divide was even starker among older workers."
  2. Workers strongly value uninterrupted focus at work, but most will make an exception to help others — "The results suggest we need to be more thoughtful about when we break our concentration, or ask others to do so. When people know they are helping others in a meaningful way, they tend to be okay with some distraction. But the busywork of meetings, alerts, and emails can quickly disrupt a person’s flow—one of the most important values we polled."
  3. Most workers have slightly more trust in people closest to the work, rather than people in upper management — "Among all respondents, 53% trusted people “closest to the work,” while only 45% trusted “upper management.” You might assume that younger workers would be the most likely to trust peers over management, but in fact, the opposite was true."
  4. Workers are torn between idealism and pragmatism — "It’s tempting to assume that addressing just one piece—like taking a stand on societal issues—will necessarily get in the way of the work itself. But our research suggests we can begin to solve the two in tandem, as more equality, inclusion, and diversity tends to come hand-in-hand with a healthier mindset about work."

I think we need to reclaim workplace culture from the hustlers, shallow thinkers, and those focused on short-term profit. Let's reflect on how things actually work in practice. As Nassim Nicholas Taleb says about being 'antifragile', let's "look for habits and rules that have been around for a long time".


Also check out:

  • Health effects of job insecurity (IZA) — "Workers’ health is not just a matter for employees and employers, but also for public policy. Governments should count the health cost of restrictive policies that generate unemployment and insecurity, while promoting employability through skills training."
  • Will your organization change itself to death? (opensource.com) — "Sometimes, an organization returns to the same state after sensing a stimulus. Think about a kid's balancing doll: You can push it and it'll wobble around, but it always returns to its upright state... Resilient organizations undergo change, but they do so in the service of maintaining equilibrium."
  • Your Brain Can Only Take So Much Focus (HBR) — "The problem is that excessive focus exhausts the focus circuits in your brain. It can drain your energy and make you lose self-control. This energy drain can also make you more impulsive and less helpful. As a result, decisions are poorly thought-out, and you become less collaborative."

Idleness always produces fickle changes of mind

If you've never read Michel de Montaigne's Essays then you're missing a treat. He's thought of as the prototypical 'blogger', and most of what he's written has survived the vicissitudes of opinion over the last 450 years. The quotation for today's article comes from him.

As Austin Kleon notes in the post accompanying the image that illustrates this article, idleness is not the same as laziness:

I’m... a practitioner of intentional idleness: blocking off time in which I can do absolutely nothing. (Like Terry Gilliam, I would like to be known as an “Arch Idler.”) “Creative people need time to just sit around and do nothing,” I wrote in Steal Like An Artist.  (See Jenny Odell’s How To Do Nothing, Robert Louis Stevenson’s An Apology for Idlers, Tom Hodgkinson’s “The Idle Parent,” Tim Kreider’s “The ‘Busy’ Trap,” etc. )

Austin Kleon

There's a great post on The Art of Manliness by Brett and Kate McKay about practising productive procrastination, and how positive it can be. They break the tasks we perform on an average day down into three groups:

Tier 1: tasks that are the most cognitively demanding — hard decisions, challenging writing, boring reading, tough analysis, etc.

Tier 2: tasks that take effort, but not as much — administrative work, making appointments, answering emails, etc.

Tier 3: tasks that still require a bit of effort, but in terms of cognitive load are nearly mindless — cleaning, organizing, filing, paying bills, etc.

Brett and Kate McKay

As I've said many times before, I can only really do four hours of really deep work (the 'Tier 1' tasks) per day. Of course, the demands of any job, and most life admin, mostly fall into Tier 2, with a bit of Tier 3 for good measure.

The thrust of their mantra to 'practise productive procrastination' is that, if you're not feeling up to a Tier 1 task, you should do a Tier 2 or Tier 3 task instead. Apparently (and I'm obviously not their target audience here), when most people put off a Tier 1 task they do nothing useful at all, checking Facebook, gossiping, and playing games.

The trouble is that new workplace tools can almost encourage us into low-level tasks, as an article by Rani Molla for Recode explains:

On average, employees at large companies are each sending more than 200 Slack messages per week, according to Time Is Ltd., a productivity-analytics company that taps into workplace programs — including Slack, calendar apps, and the Office Suite — in order to give companies recommendations on how to be more productive. Power users sending out more than 1,000 messages per day are “not an exception.”

Keeping up with these conversations can seem like a full-time job. After a while, the software goes from helping you work to making it impossible to get work done.

Rani Molla

Constant interruptions aren't good for deep work, nor are open plan offices. However, I remember working in an office that had both. There was a self-policed time shortly after lunch (never officially sanctioned or promoted) when, for an hour or two, people really got 'in the zone'. It was great.

What we need is a way to block out our calendars for unstructured but deep work, and to be trusted to do so. I think that most workplaces and most bosses would actually be OK with this. Perhaps we just need to get on with it?


Friday finds

Check out these links that I came across this week and thought you'd find interesting:

  • Netflix Saves Our Kids From Up To 400 Hours of Commercials a Year (Local Babysitter) — "We calculated a series of numbers related to standard television homes, compared them to Netflix-only homes and found an interesting trend with regard to how many commercials a streaming-only household can save their children from having to watch."
  • The Emotional Charge of What We Throw Away (Kottke.org) — "consumers actually care more about how their stuff is discarded, than how it is manufactured"
  • Sidewalk Labs' street signs alert people to data collection in use (Engadget) — "The idea behind Sidewalk Labs' icons is pretty simple. The company wants to create an image-based language that can quickly convey information to people the same way that street and traffic signs do. Icons on the signs would show if cameras or other devices are capturing video, images, audio or other information."
  • The vision of the home as a tranquil respite from labour is a patriarchal fantasy (Dezeen) — "[F]or a growing number of critics, the nuclear house is a deterministic form of architecture which stifles individual and collective potential. Designed to enforce a particular social structure, nuclear housing hardwires divisions in labour, gender and class into the built fabric of our cities. Is there now a case for architects to take a stand against nuclear housing?"
  • The Anarchists Who Took the Commuter Train (Longreads) — "In the twenty-first century, the word “anarchism” evokes images of masked antifa facing off against neo-Nazis. What it meant in the early twentieth century was different, and not easily defined. "

Image from These gorgeous tiny houses can operate entirely off the grid (Fast Company)

The school system is a modern phenomenon, as is the childhood it produces

Good old Ivan Illich with today's quotation-as-title. If you haven't read his Deschooling Society yet, you must. Given that actions speak louder than words, it really makes you think about what we're actually doing to children when we send them off to the world of formal education.

The pupil is thereby "schooled" to confuse teaching with learning, grade advancement with education, a diploma with competence, and fluency with the ability to say something new.

Ivan Illich

I left teaching almost a decade ago and still have a strong connection to the classroom through my wife (who's a teacher), my children (who are at school) and my friends/network (many of whom are involved in formal education).

That's why a post entitled The Absurd Structure of High School by Bernie Bleske resonated with me, even though it's based on his experience in the US:

The system’s scheduling fails on every possible level. If the goal is productivity, the fractured nature of the tasks undermines efficient product. So much time is spent in transition that very little is accomplished before there is a demand to move on. If the goal is maximum content conveyed, then the system works marginally well, in that students are pretty much bombarded with detail throughout their school day. However, that breadth of content comes at the cost of depth of understanding. The fractured nature of the work, the short amount of time provided, and the speed of change all undermine learning beyond the superficial. It’s shocking, really, that students learn as much as they do.

Bernie Bleske

We've known for a long time now that a 'stage, not age' approach is much better for everyone involved. My daughter enjoys school but, sadly, is pretty bored there. And, frustratingly, there's not much we as parents can do about it.

If you've got an academically-able child, on the surface it seems like part of the problem is them being 'held back' by their peers. However, studies show that there's little empirical evidence for this being true — as Oscar Hedstrom points out in Why streaming kids according to ability is a terrible idea:

Despite all this, there is limited empirical evidence to suggest that streaming results in better outcomes for students. Professor John Hattie, director of the Melbourne Education Research Institute, notes that ‘tracking has minimal effects on learning outcomes and profound negative equity effects’. Streaming significantly – and negatively – affects those students placed in the bottom sets. These students tend to have much higher representation of low socioeconomic backgrounds. Less significant is the small benefit for those lucky clever students in the higher sets. The overall result is relative inequality. The smart stay smart, and the dumb get dumber, further entrenching social disadvantage.

Oscar Hedstrom

I worked in a school in a rough area that streamed kids based on the results of a 'literacy skills' test on entry. The result was actually middle-class segregation within the school. As a child myself, I also went to a pretty tough school in an ex-mining town, which was a bit more integrated.

The trouble with all of this is that most of the learning that happens in school is inside some form of classroom. As a recent Innovation Unit report entitled Local Learning Ecosystems: emerging models discusses, 'learning ecosystem' is a bit of a buzz-term at the moment, but with potentially useful applications:

It remains to be seen whether the education ecosystem idea, as expressed in these varieties, will evolve as a truly significant new driver in public education on a large scale. These initiatives reflect ambitious visions well beyond current achievements. Conventional systems, with their excessive assessment routines, pressurized school communities, and entrenched vestigial approaches, are difficult to shift. But this report offers a taste of the creative flourishing in education thinking today that has emerged against, and perhaps in response to, the erosion of resources for public education, often abetted by indifferent, even hostile government.

Local Learning Ecosystems: emerging models

My go-to book around all of this is still Prof. Keri Facer's excellent Learning Futures: education, technology and social change. I still haven't come across another book with such a hopeful, practical vision for the future since reading it when it came out in 2011.

Hopefully, taking a learning ecosystem or 'ecology' approach will provide the necessary shift of perspective to move us to the world beyond (just) classrooms.



Form is the possibility of structure

The philosopher Ludwig Wittgenstein with today's quotation-as-title. I'm using it as a way in to discuss some things around city planning, and in particular an article I've been meaning to discuss for what seems like ages.

In an article for The LA Times, Jessica Roy highlights a phenomenon I wish I could go back and show my 12-year-old self:

Thirty years ago, Maxis released “SimCity” for Mac and Amiga. It was succeeded by “SimCity 2000” in 1993, “SimCity 3000” in 1999, “SimCity 4” in 2003, a version for the Nintendo DS in 2007, “SimCity: BuildIt” in 2013 and an app launched in 2014.

Along the way, the games have introduced millions of players to the joys and frustrations of zoning, street grids and infrastructure funding — and influenced a generation of people who plan cities for a living. For many urban and transit planners, architects, government officials and activists, “SimCity” was their first taste of running a city. It was the first time they realized that neighborhoods, towns and cities were things that were planned, and that it was someone's job to decide where streets, schools, bus stops and stores were supposed to go.

Jessica Roy

Some games are just awesome. SimCity is still popular now on touchscreen devices, and my kids play it occasionally. It's interesting to read in the article how different people, now responsible for real cities, played the game. For example, Roy quotes the Vice President of Transportation and Housing at the non-profit Silicon Valley Leadership Group:

"I was not one of the players who enjoyed Godzilla running through your city and destroying it. I enjoyed making my city run well."

Jason Baker

I, on the other hand, particularly enjoyed booting up 'scenario mode' where you had to rescue a city that had been ravaged by Godzilla, aliens, or a natural disaster.

This isn't an article about nostalgia, though, and if you read the article in more depth you realise that it's an interesting insight into our psychology around governance of cities and nations. For example, going back to an article from 2018 that also references SimCity, Devon Zuegel writes:

The way we live is shaped by our infrastructure — the public spaces, building codes, and utilities that serve a city or region. It can act as the foundation for thriving communities, but it can also establish unhealthy patterns when designed poorly.

[...]

People choose to drive despite its costs because they lack reasonable alternatives. Unfortunately, this isn’t an accident of history. Our transportation system has been overly focused on automobile traffic flow as its metric of success. This single-minded focus has come at the cost of infrastructure that supports alternative ways to travel. Traffic flow should, instead, be one goal out of many. Communities would be far healthier if our infrastructure actively encouraged walking, cycling, and other forms of transportation rather than subsidizing driving and ignoring alternatives.

Devon Zuegel

In other words, the decisions we ask our representatives to make have a material impact in shaping our environment. That, in turn, affects our decisions about how to live and work.

When we don't have data about what people actually do, it's easy for ideology and opinions to get in the way. That's why I'm interested in what Los Angeles is doing with its public transport system. As reported by Adam Rogers in WIRED, the city is using mobile phone data to see how it can 'reboot' its bus system. It turns out that the people running the system had completely the wrong assumptions:

In fact, Metro's whole approach turned out to be skewed to the wrong kinds of trips. “Traditionally we're trying to provide fast service for long-distance trips,” [Anurag Komanduri, a data analyst] says. That's something the Orange Line and trains are good at. But the cell phone data showed that only 16 percent of trips in LA County were longer than 10 miles. Two-thirds of all travel was less than five miles. Short hops, not long hauls, rule the roads.

Adam Rogers

There's some discussion later in the article about the "baller move" of ripping down some of the freeways to force people to use public transportation. Perhaps that's actually what's required.

In Barcelona, for example, "fiery leftist housing activist" Ada Colau became the city's mayor in 2015. Since then, they've been doing some radical experimentation. David Roberts reports for Vox on what they've done with one area of the city that I've actually seen with my own eyes:

Inside the superblock in the Poblenou neighborhood, in the middle of what used to be an intersection, there’s a small playground, with a set of about a dozen picnic tables next to it, just outside a local cafe. On an early October evening, neighbors sit and sip drinks to the sound of children’s shouts and laughter. The sun is still out, and the warm air smells of wild grasses growing in the fresh plantings nearby.

David Roberts

I can highly recommend watching this five-minute video overview of the benefits of this approach:

[www.youtube.com/watch](https://www.youtube.com/watch?v=ZORzsubQA_M)

So if it works, why aren't we seeing more of this? Perhaps it's because, as Simon Wren-Lewis points out on his blog, most of us are governed by incompetents:

An ideology is a collection of ideas that can form a political imperative that overrides evidence. Indeed most right wing think tanks are designed to turn the ideology of neoliberalism into policy based evidence. It was this ideology that led to austerity, the failed health reforms and the privatisation of the probation service. It also played a role in Brexit, with many of its protagonists dreaming of a UK free from regulations on workers rights and the environment. It is why most of the recent examples of incompetence come from the political right.

A pluralist democracy has checks and balances in part to guard against incompetence by a government or ministers. That is one reason why Trump and the Brexiters so often attack elements of a pluralist democracy. The ultimate check on incompetence should be democracy itself: incompetent politicians are thrown out. But when a large part of the media encourage rather than expose acts of incompetence, and the non-partisan media treat knowledge as just another opinion, that safeguard against persistent incompetence is put in danger.

Simon Wren-Lewis

We seem to have started with SimCity and ended with Trump and Brexit. Sorry about that, but without decent government, we can't hope to improve our communities and environment.


Also check out:

  • ‘Nation as a service’ is the ultimate goal for digitized governments (TNW) — "Right now in Estonia, when you have a baby, you automatically get child benefits. The user doesn’t have to do anything because the government already has all the data to make sure the citizen receives the benefits they’re entitled to."
  • The ethics of smart cities (RTE) — "With ethics-washing, a performative ethics is being practised designed to give the impression that an issue is being taken seriously and meaningful action is occurring, when the real ambition is to avoid formal regulation and legal mechanisms."
  • Cities as learning platforms (Harold Jarche) — "For the past century we have compartmentalized the life of the citizen. At work, the citizen is an ‘employee’. Outside the office he may be a ‘consumer’. Sometimes she is referred to as a ‘taxpayer’. All of these are constraining labels, ignoring the full spectrum of citizenship."

Life is like riding a bicycle. To keep your balance, you must keep moving

Thanks to Einstein for today's quote-as-title. Having once again witnessed the joy of electric scooters in Lisbon recently, I thought I'd look at this trend of 'micromobility'.

Let's begin with Horace Dediu, who explains the term:

Simply, Micromobility promises to have the same effect on mobility as microcomputing had on computing. Bringing transportation to many more and allowing them to travel further and faster.  I use the term micromobility precisely because of the connotation with computing and the expansion of consumption but also because it focuses on the vehicle rather than the service. The vehicle is small, the service is vast.

Horace Dediu

Micromobility covers mainly electric scooters and (e-)bikes, which can be found in many of the cities I've visited over the past year. Not in the UK, though, where riding electric scooters is technically illegal. Why? Because of a 183-year-old law, explains Jeff Parsons in Metro:

You can’t ride scooters on the road, because the DVLA requires that electric vehicles be registered and taxed. And you can’t ride scooters on the pavement because of the 1835 Highways Act that prohibits anyone from riding a ‘carriage’ on the pavement.

Jeff Parsons

It's only a matter of time, though, before legislation is passed to remove this anachronism. And, to be honest, I can't imagine the police with their stretched resources pulling over anyone who's using one sensibly.

Electric scooters in particular are great and, if you haven't tried one, you should. Florent Crivello, one of Uber's product managers, explains why they're not just fun, but actually valuable:

  1. Cleaner and more energy efficient
  2. More space efficient
  3. Safer
  4. Making the city a better place
  5. Force for economic inclusion

You might be wondering about the third one of these, as I was. Crivello includes this chart:

Courtesy of Florent Crivello

Of course, as he points out, you can prevent cars running into scooters, bikes, and pedestrians by building separate lanes for them, with a high kerb in between. Countries that have done this, like the Netherlands, have seen a sharp decline in fatalities and injuries.

Despite the title, I'm focusing on electric scooters because of my enthusiasm for them and because of the huge growth since they became a thing about 18 months ago. Just look at this chart that Megan Rose Dickey includes in a recent TechCrunch article:

Chart courtesy of TechCrunch

One of the biggest downsides to electric scooters at the moment, and one which threatens the whole idea of 'micromobility', is over-supply. As this photograph in an article by Alan Taylor for The Atlantic shows, this can quickly get out of hand when VC-backed companies are involved:

Unused shared bikes in a vacant lot in Xiamen, Fujian province, China (photo courtesy of The Atlantic)

This can scare cities, who don't know how to deal with these kinds of potential consequences. That's why it's refreshing to see Charlotte in North Carolina lead the way by partnering with Passport, a transportation logistics company. As John R. Quain reports for Digital Trends:

“When e-scooters first came to town,” said Charlotte’s city manager Marcus Jones, “it left our shared bike program in the dust.”

[...]

By tracking scooter rentals and coordinating it with other information about public transit routes, congestion, and parking information, Passport can report on where scooters and bikes tend to be idle, where they get the most use, and how they might be deployed to serve more people. Furthermore, rather than railing against e-scooters, such information can help a city encourage proper use and behavior.

John R. Quain

I'm really quite excited about e-scooters, and can't wait until I can buy and use one legally in the UK!


Also check out:

That which we do not bring to consciousness appears in our lives as fate

Today's title is a quotation from Carl Jung, via a recent issue of New Philosopher magazine. I thought it was a useful frame for a discussion around a few things I've been reading recently, including an untranslatable Finnish word, music and teen internet culture, as well as whether life does indeed get better once you turn forty.

Let's start with that Finnish word, discussed in Quartzy by Olivia Goldhill:

At some point in life, all of us get that unexpected call on a Tuesday afternoon that distorts our world and makes everything else irrelevant: There’s been an accident. Or, you need surgery. Or, come home now, he’s dying. We get through that time, somehow, drawing on energy reserves we never knew we had and persevering, despite the exhaustion. There’s no word in English for the specific strength it takes to pull through, but there is a word in Finnish: sisu.

Olivia Goldhill

I'm guessing Goldhill is American, as we English have a term for that: Blitz spirit. It's even been invoked as a way of getting us through the vagaries of Brexit! 🙄

Despite my flippancy, there are, of course, words that are pretty untranslatable between languages. But one thing that unites us no matter what language we speak is music. Interestingly, Alexis Petridis in The Guardian notes that there are teenage musicians making music in their bedrooms that really resonates across language barriers:

For want of a better name, you might call it underground bedroom pop, an alternate musical universe that feels like a manifestation of a generation gap: big with teenagers – particularly girls – and invisible to anyone over the age of 20, because it exists largely in an online world that tweens and teens find easy to navigate, but anyone older finds baffling or risible. It doesn’t need Radio 1 or what is left of the music press to become popular because it exists in a self-contained community of YouTube videos and influencers; some bedroom pop artists found their music spread thanks to its use in the background of makeup tutorials or “aesthetic” videos, the latter a phenomenon whereby vloggers post atmospheric videos of, well, aesthetically pleasing things.

Alexis Petridis

Some people find this scary. I find it completely awesome, but may be over-compensating now that I've passed 35 years of age. Who wants to listen to and like the same music as everyone else?

Talking of getting older, there's a saying that "life begins at forty". Well, an article in The Economist would suggest that, on average, the happiness of males in Western Europe doesn't vary that much.

The Economist: graph showing self-reported happiness levels

I'd love to know what causes that decline in the former USSR states, and the uptick in the United States? The article isn't particularly forthcoming, which is a shame.

Perhaps as you get to middle-age there's a realisation that this is pretty much going to be it for the rest of your life. In some places, if you have the respect of your family, friends, and culture, and are reasonably well-off, that's no bad thing. In other cultures, that might be a sobering thought.

One of the great things about studying Philosophy since my teenage years is that I feel very prepared for getting old. Perhaps that's what's needed here? More philosophical thinking and training? I don't think it would go amiss.


Also check out:

  • What your laptop-holding position says about you (Quartz at Work) — "Over the past few weeks, we’ve been observing Quartzians in their natural habitat and have tried to make sense of their odd office rituals in porting their laptops from one meeting to the next."
  • Meritocracy doesn’t exist, and believing it does is bad for you (Fast Company) — "Simply holding meritocracy as a value seems to promote discriminatory behavior."
  • Your Body as a Map (Sapiens) — "Reading the human body canvas is much like reading a map. But since we are social beings in complex contemporary situations, the “legend” changes depending on when and where a person looks at the map."

Fascinating Friday Facts

Here are some links I thought I'd share which struck me as interesting:


Header image: Keep out! The 100m² countries – in pictures (The Guardian)

There is no exercise of the intellect which is not, in the final analysis, useless

A quotation from a short story from Jorge Luis Borges' Labyrinths provides the title for today's article. I want to dig into the work of danah boyd and the transcript of a talk she gave recently, entitled Agnotology and Epistemological Fragmentation. It helps us understand what's going on behind the seemingly-benign fascias of social networks and news media outlets.

She explains the title of her talk:

Epistemology is the term that describes how we know what we know. Most people who think about knowledge think about the processes of obtaining it. Ignorance is often assumed to be not-yet-knowledgeable. But what if ignorance is strategically manufactured? What if the tools of knowledge production are perverted to enable ignorance? In 1995, Robert Proctor and Iain Boal coined the term “agnotology” to describe the strategic and purposeful production of ignorance. In an edited volume called Agnotology, Proctor and Londa Schiebinger collect essays detailing how agnotology is achieved. Whether we’re talking about the erasure of history or the undoing of scientific knowledge, agnotology is a tool of oppression by the powerful.

danah boyd

Having already questioned 'media literacy' the way it's currently taught through educational institutions and libraries, boyd explains how the alt-right are streets ahead of educators when it comes to pushing their agenda:

One of the best ways to seed agnotology is to make sure that doubtful and conspiratorial content is easier to reach than scientific material. And then to make sure that what scientific information is available, is undermined. One tactic is to exploit “data voids.” These are areas within a search ecosystem where there’s no relevant data; those who want to manipulate media purposefully exploit these. Breaking news is one example of this.

[...]

Today’s drumbeat happens online. The goal is no longer just to go straight to the news media. It’s to first create a world of content and then to push the term through to the news media at the right time so that people search for that term and receive specific content. Terms like caravan, incel, crisis actor. By exploiting the data void, or the lack of viable information, media manipulators can help fragment knowledge and seed doubt.

danah boyd

Harold Jarche uses McLuhan's tetrads to understand this visually, commenting: "This is an information war. Understanding this is the first step in fighting for democracy."

Harold Jarche on Agnotology

We can teach children sitting in classrooms all day about checking URLs and the provenance of the source, but how relevant is that when they're using YouTube as their primary search engine? Returning to danah boyd:

YouTube has great scientific videos about the value of vaccination, but countless anti-vaxxers have systematically trained YouTube to make sure that people who watch the Center for Disease Control and Prevention’s videos also watch videos asking questions about vaccinations or videos of parents who are talking emotionally about what they believe to be the result of vaccination. They comment on both of these videos, they watch them together, they link them together. This is the structural manipulation of media.

danah boyd

It's not just the new and the novel. Even things that are relatively obvious to those of us who have grown up as adults online are confusing to older generations. As this article by BuzzFeed News reporter Craig Silverman points out, conspiracy-believing retirees have disproportionate influence on our democratic processes:

Older people are also more likely to vote and to be politically active in other ways, such as making political contributions. They are wealthier and therefore wield tremendous economic power and all of the influence that comes with it. With more and more older people going online, and future 65-plus generations already there, the online behavior of older people, as well as their rising power, is incredibly important — yet often ignored.

Craig Silverman

So when David Buckingham asks 'Who needs digital literacy?' I think the answer is everyone. Having been a fan of his earlier work, it saddens me to realise that he hasn't kept up with the networked era:

These days, I find the notion of digital literacy much less useful – and to some extent, positively misleading. The fundamental problem is that the idea is defined by technology itself. It makes little sense to distinguish between texts (or media) on the grounds of whether they are analogue or digital: almost all media (including print media) involve the use of digital technology at some stage or other. Fake news and disinformation operate as much in old, analogue media (like newspapers) as they do online. Meanwhile, news organisations based in old media make extensive and increasing use of online platforms. The boundaries between digital and analogue may still be significant in some situations, but they are becoming ever more blurred.

David Buckingham

Actually, as Howard Rheingold pointed out a number of years ago in Net Smart, and as boyd has done in her own work, networks change everything. You can't seriously compare pre-networked and post-networked cultures in any way other than in contrast.

Buckingham suggests that, seeing as the (UK) National Literacy Trust are on the case, we "don't need to reinvent the wheel". The trouble is that the wheel has already been reinvented, and lots of people either didn't notice, or are acting as though it hasn't been.

There's a related article by Anna McKie in the THE entitled Teaching intelligence: digital literacy in the ‘alternative facts’ era which, unfortunately, is now behind a paywall. It reports on a special issue of the journal Teaching in Higher Education where the editors have brought together papers on the contribution made by Higher Education to expertise and knowledge in the age of 'alternative facts':

[S]ocial media has changed the dynamic of information in our society, [editor] Professor Harrison added. “We've moved away from the idea of experts who assess information to one where the validity of a statement is based on the likes, retweets and shares it gets, rather than whether the information is valid.”

The first task of universities is to go back to basics and “help students to understand the difference between knowledge and information, and how knowledge is created, which is separate to how information is created”, Professor Harrison said. “Within [each] discipline, what are the skills needed to assess that?”

Many assume that schools or colleges are teaching this, but that is not the case, he added. “Academics should also be wary of the extent to which they themselves understand the new paradigms of knowledge creation,” Professor Harrison warned.

Anna McKie

One of the reasons I decided not to go into academia is that, certain notable exceptions aside, the focus is on explaining rather than changing. Or, to finish with another quotation, this time from Karl Marx, "Philosophers have hitherto only interpreted the world in various ways; the point is to change it."


Also check out:

Sometimes even to live is an act of courage

Thank you to Seneca for the quotation for today's title, which sprang to mind after reading Rosie Spinks' claim in Quartz that we've reached 'peak influencer'.

Where once the social network was basically lunch and sunsets, it’s now a parade of strategically-crafted life updates, career achievements, and public vows to spend less time online (usually made by people who earn money from social media)—all framed with the carefully selected language of a press release. Everyone is striving, so very hard.

Thank goodness for that. The selfie-obsessed influencer brigade is an insidious effect of the neoliberalism that permeates western culture:

For the internet influencer, everything from their morning sun salutation to their coffee enema (really) is a potential money-making opportunity. Forget paying your dues, or working your way up—in fact, forget jobs. Work is life, and getting paid to live your best life is the ultimate aspiration.

[...]

“Selling out” is not just perfectly OK in the influencer economy—it’s the raison d’etre. Influencers generally do not have a craft or discipline to stay loyal to in the first place, and by definition their income comes from selling a version of themselves.

As Yascha Mounk, writing in The Atlantic, explains, the problem isn't necessarily with social networks. It's that you care about them. Social networks flatten everything into a never-ending stream. That stream makes it very difficult to differentiate between gossip and (for example) extremely important things that are an existential threat to democratic institutions:

“When you’re on Twitter, every controversy feels like it’s at the same level of importance,” one influential Democratic strategist told me. Over time, he found it more and more difficult to tune Twitter out: “People whose perception of reality is shaped by Twitter live in a different world and a different country than those off Twitter.”

It's easier for me to say these days that our obsession with Twitter and Instagram is unhealthy. While I've never used Instagram (because it's owned by Facebook), a decade ago I was spending hours each week on Twitter. My relationship with the service has changed as I've grown up and it has changed — especially after it became a publicly-traded company in 2013.

Twitter, in particular, now feels like a never-ending soap opera similar to EastEnders. There's always some outrage or drama running. Perhaps it's better, as Catherine Price suggests in The New York Times, just to put down our smartphones?

Until now, most discussions of phones’ biochemical effects have focused on dopamine, a brain chemical that helps us form habits — and addictions. Like slot machines, smartphones and apps are explicitly designed to trigger dopamine’s release, with the goal of making our devices difficult to put down.

This manipulation of our dopamine systems is why many experts believe that we are developing behavioral addictions to our phones. But our phones’ effects on cortisol are potentially even more alarming.

Cortisol is our primary fight-or-flight hormone. Its release triggers physiological changes, such as spikes in blood pressure, heart rate and blood sugar, that help us react to and survive acute physical threats.

Depending on how we use them, social networks can stoke the worst feelings in us: emotions such as jealousy, anger, and worry. This is not conducive to healthy outcomes, especially for children where stress has a direct correlation to the take-up of addictive substances, and to heart disease in later life.

I wonder how future generations will look back at this time period?


Also check out:

Anything invented after you're thirty-five is against the natural order of things

I'm fond of the above quotation by Douglas Adams that I've used for the title of this article. It serves as a reminder to myself that I've now reached an age when I'll look at a technology and wonder: why?

Despite this, I'm quite excited about the potential of two technologies that will revolutionise our digital world both in our homes and offices and when we're out-and-about. Those technologies? Wi-Fi 6, as it's known colloquially, and 5G networks.

Let's take Wi-Fi 6 first, which, as Chuong Nguyen explains in an article for Digital Trends, isn't just about faster speeds:

A significant advantage for Wi-Fi 6 devices is better battery life. Though the standard promotes Internet of Things (IoT) devices being able to last for weeks, instead of days, on a single charge as a major benefit, the technology could even prove to be beneficial for computers, especially since Intel’s latest 9th-generation processors for laptops come with Wi-Fi 6 support.

Likewise, Alexis Madrigal, writing in The Atlantic, explains that mobile 5G networks bring benefits other than streaming YouTube videos at ever-higher resolutions, but are quite a technological hurdle:

The fantastic 5G speeds require higher-frequency, shorter-wavelength signals. And the shorter the wavelength, the more likely it is to be blocked by obstacles in the world.

[...]

Ideally, [mobile-associated companies] would like a broader set of customers than smartphone users. So the companies behind 5G are also flaunting many other applications for these networks, from emergency services to autonomous vehicles to every kind of “internet of things” gadget.

If you've been following the kerfuffle around the UK using Huawei's technology for its 5G infrastructure, you'll already know about the politics and security issues at stake here.

Sue Halpern, writing in The New Yorker, outlines the claimed benefits:

Two words explain the difference between our current wireless networks and 5G: speed and latency. 5G—if you believe the hype—is expected to be up to a hundred times faster. (A two-hour movie could be downloaded in less than four seconds.) That speed will reduce, and possibly eliminate, the delay—the latency—between instructing a computer to perform a command and its execution. This, again, if you believe the hype, will lead to a whole new Internet of Things, where everything from toasters to dog collars to dialysis pumps to running shoes will be connected. Remote robotic surgery will be routine, the military will develop hypersonic weapons, and autonomous vehicles will cruise safely along smart highways. The claims are extravagant, and the stakes are high. One estimate projects that 5G will pump twelve trillion dollars into the global economy by 2035, and add twenty-two million new jobs in the United States alone. This 5G world, we are told, will usher in a fourth industrial revolution.
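As a rough sanity check on that download claim: assuming a two-hour movie is around 4 GB (my assumption, not a figure from Halpern's piece), the arithmetic works out like this:

```python
# Back-of-envelope check of "a two-hour movie in under four seconds".
# The 4 GB file size is an assumption for illustration.
movie_gb = 4                     # gigabytes
gigabits = movie_gb * 8          # convert bytes to bits
required_gbps = gigabits / 4     # spread over a four-second download
print(required_gbps)             # → 8.0 (gigabits per second)
```

Eight gigabits per second is roughly eighty times a typical 100 Mbps fixed-line connection, so it is at least in the same ballpark as the "up to a hundred times faster" hype.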

But greater speeds and lower latency aren't all upside for all members of society, as I learned in this BBC Beyond Today podcast episode about Korean spy cam porn. Halpern explains:

In China, which has installed three hundred and fifty thousand 5G relays—about ten times more than the United States—enhanced geolocation, coupled with an expansive network of surveillance cameras, each equipped with facial-recognition technology, has enabled authorities to track and subordinate the country’s eleven million Uighur Muslims. According to the Times, “the practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism.”

Automated racism, now there's a thing. It turns out that technologies amplify our existing prejudices. Perhaps we should be a bit more careful and ask more questions before we march down the road of technological improvements? Especially given 5G could affect our ability to predict major storms. I'm reading Low-tech Magazine: The Printed Website at the moment, and it's pretty eye-opening about what we could be doing instead.


Also check out:

The smallest deed is better than the greatest intention

Thanks to John Burroughs for today's title. For me, it's an oblique reference to some of the situations I find myself in, both in my professional and personal life. After all, words are cheap and actions are difficult.

I'm going to take the unusual step of quoting someone who's quoting me. In this case, it's Stephen Downes picking up on a comment I made in the cc-openedu Google Group. I'd link directly to my comments, but for some reason a group about open education is... closed?

I'd like to echo a point David Kernohan made when I worked with him on the Jisc OER programme. He said: "OER is a supply-side term". Let's face it, there are very few educators specifically going out and looking for "Openly Licensed Resources". What they actually want are resources that they can access for free (or at a low cost) and that they can legally use. We've invented OER as a term to describe that, but it may actually be unhelpfully ambiguous.

Shortly after posting that, I read this post from Sarah Lambert on the GO-GN (Global OER Graduate Network) blog. She says:

[W]hile we’re being all inclusive and expanding our “open” to encompass any collaborative digital practice, then our “open” seems to be getting less and less distinctive. To the point where it’s getting quite easily absorbed by the mainstream higher education digital learning (eLearning, Technology Enhanced Learning, ODL, call it what you will). Is it a win for higher education to absorb and assimilate “open” (and our gift labour) as the latest innovation feeding the hungry marketised university that Kate Bowles spoke so eloquently about? Is it a problem if not only the practice, but the research field of open education becomes inseparable with mainstream higher education digital learning research?

My gloss on this is that 'open education' may finally have moved into the area of productive ambiguity. I talked about this back in 2016 in a post on a blog I post to only very infrequently, so I might as well quote myself again:

Ideally, I’d like to see ‘open education’ move into the realm of what I term productive ambiguity. That is to say, we can do some work with the idea and start growing the movement beyond small pockets here and there. I’m greatly inspired by Douglas Rushkoff’s new Team Human podcast at the moment, feeling that it’s justified the stance that I and others have taken for using technology to make us more human (e.g. setting up a co-operative) and against the reverse (e.g. blockchain).

That's going to make a lot of people uncomfortable, and hopefully uncomfortable enough to start exploring new, even better areas. 'Open Education' now belongs, for better or for worse, to the majority. Whether that's 'Early majority' or 'Late majority' on the innovation adoption lifecycle curve probably depends where in the world you live.

Diffusion of innovation curve
CC BY Pnautilus (Wikipedia)

Things change and things move on. The reason I used that xkcd cartoon about IRC at the top of this post is because there has been much (OK, some) talk about Mozilla ending its use of IRC.

While we still use it heavily, IRC is an ongoing source of abuse and harassment for many of our colleagues and getting connected to this now-obscure forum is an unnecessary technical barrier for anyone finding their way to Mozilla via the web. Available interfaces really haven’t kept up with modern expectations, spambots and harassment are endemic to the platform, and in light of that it’s no coincidence that people trying to get in touch with us from inside schools, colleges or corporate networks are finding that often as not IRC traffic isn’t allowed past institutional firewalls at all.

Cue much hand-wringing from the die-hards in the Mozilla community. Unfortunately, Slack, which originally had a bridge/gateway for IRC, has pulled up the drawbridge on that front. Mozilla could go with something like Mattermost, but given recent history I bet they go with Discord (or similar).

As Seth Godin points out in his most recent podcast episode, everyone wants to be described as 'supple'; nobody wants to be described as 'brittle'. Yet the actions we take suggest otherwise. We expect that, just because the change we see in the world isn't convenient, we can somehow slow it down. Nope, you just have to roll with it, whether that's changing technologies, or different approaches to organising ideas and people.


Also check out:

  • Do Experts Listen to Other Experts? (Marginal Revolution) — "very little is known about how experts influence each others’ opinions, and how that influence affects final evaluations."
  • Why Symbols Aren’t Forever (Sapiens) — "The shifting status of cultural symbols reveals a lot about who we are and what we value."
  • Balanced Anarchy or Open Society? (Kottke.org) — "Personal computing and the internet changed (and continues to change) the balance of power in the world so much and with such speed that we still can’t comprehend it."

A little Friday randomness

Not everything I read and bookmark to come back to is serious. So here, for the sake of a little levity, are some things I've discovered recently that either made me smile, or think "that's cool":


Header image: xkcd

Educational institutions are at a crossroads of relevance

One of the things that attracted me to the world of Open Badges and digital credentialing back in 2011 was the question of relevance. As a Philosophy graduate, I'm absolutely down with the idea of a broad, balanced education, and learning as a means of human flourishing.

However, in a world where we measure schools, colleges, and universities through an economic lens, it's inevitable that learners do so too. As I've said in presentations and to clients many times, I want my children to choose to go to university because it's the right choice for them, not because they have to.

In an article in Forbes, Brandon Busteed notes that we're on the verge of a huge change in Higher Education:

This shift will go down as the biggest disruption in higher education whereby colleges and universities will be disintermediated by employers and job seekers going direct. Higher education won’t be eliminated from the model; degrees and other credentials will remain valuable and desired, but for a growing number of young people they’ll be part of getting a job as opposed to college as its own discrete experience. This is already happening in the case of working adults and employers that offer college education as a benefit. But it will soon be true among traditional age students. Based on a Kaplan University Partners-QuestResearch study I led and which was released today, I predict as many as one-third of all traditional students in the next decade will "Go Pro Early" in work directly out of high school with the chance to earn a college degree as part of the package.

This is true to some degree in the UK as well, through Higher Apprenticeships. University study becomes a part-time deal with the 'job' paying for fees. It's easy to see how this could quickly become a two-tier system for rich and poor.

A "job-first, college included model" could well become one of the biggest drivers of both increasing college completion rates in the U.S. and reducing the cost of college. In the examples of employers offering college degrees as benefits, a portion of the college expense will shift to the employer, who sees it as a valuable talent development and retention strategy with measurable return on investment benefits. This is further enhanced through bulk-rate tuition discounts offered by the higher educational institutions partnering with these employers. Students would still be eligible for federal financial aid, and they’d be making an income while going to college. To one degree or another, this model has the potential to make college more affordable for more people, while lowering or eliminating student loan debt and increasing college enrollments. It would certainly help bridge the career readiness gap that many of today’s college graduates encounter.

The 'career readiness' that Busteed discusses here is an interesting concept, and one that I think has been invented by employers who don't want to foot the bill for training. Certainly, my parents' generation weren't supposed to be immediately ready for employment straight after their education — and, of course, they weren't saddled with student debt, either.

Related, in my mind, is the way that we treat young people as data to be entered on a spreadsheet. This is managerialism at its worst. Back when I was a teacher and a form tutor, I remember how sorry I felt for the young people in my charge, who were effectively moved around a machine for 'processing' them.

Now, in an article for The Guardian, Jeremy Hannay tells it like it is for those who don't have an insight into the Kafkaesque world of schools:

Let me clear up this edu-mess for you. It’s not Sats. It’s not workload. The elephant in the room is high-stakes accountability. And I’m calling bullshit. Our education system actively promotes holding schools, leaders and teachers at gunpoint for a very narrow set of test outcomes. This has long been proven to be one of the worst ways to bring about sustainable change. It is time to change this educational paradigm before we have no one left in the classroom except the children.

Just like our dog-eat-dog society in the UK could be much more collaborative, so our education system badly needs remodelling. We've deprofessionalised teaching, and introduced a managerial culture. Things could be different, as they are elsewhere in the world.

In such systems – and they do exist in some countries, such as Finland and Canada, and even in some brave schools in this country – development isn’t centred on inspection, but rather professional collaboration. These schools don’t perform regular observations and monitoring, or fire out over-prescriptive performance policies. Instead, they discuss and design pedagogy, engage in action research, and regularly perform activities such as learning and lesson study. Everyone understands that growing great educators involves moments of brilliance and moments of mayhem.

That's the key: "moments of brilliance and moments of mayhem". Ironically, bureaucratic, hierarchical systems cannot cope with amazing teachers, because they're to some extent unpredictable. You can't put them in a box (on a spreadsheet).

Actually, perhaps it's not the hierarchy per se, but the power dynamics, as Richard D. Bartlett points out in this post.

Yes, when a hierarchical shape is applied to a human group, it tends to encourage coercive power dynamics. Usually the people at the top are given more importance than the rest. But the problem is the power, not the shape. 

What we're doing is retro-fitting the worst forms of corporate power dynamics onto education and expecting everything to be fine. Newsflash: learning is different to work, and always will be.

Interestingly, Bartlett defines three different forms of power dynamics, which I think is enlightening:

Follett coined the terms “power-over” and “power-with” in 1924. Starhawk adds a third category “power-from-within”. These labels provide three useful lenses for analysing the power dynamics of an organisation. With apologies to the original authors, here’s my definitions:

  • power-from-within or empowerment — the creative force you feel when you’re making art, or speaking up for something you believe in
  • power-with or social power — influence, status, rank, or reputation that determines how much you are listened to in a group
  • power-over or coercion — power used by one person to control another

The problem with educational institutions, I feel, is that we've largely done away with empowerment and social power, and put all of our eggs in the basket of coercion.


Also check out:

  • Working collaboratively and learning cooperatively (Harold Jarche) — "Two types of behaviours are necessary in the network era workplace — collaboration and cooperation. Cooperation is not the same as collaboration, though they are complementary."
  • Learning Alignment Model (Tom Barrett) - "It is not a step by step process to design learning, but more of a high-level thinking model to engage with that uncovers some interesting potential tensions in our classroom work."
  • A Definition of Academic Innovation (Inside Higher Ed) - "What if academic innovation was built upon the research and theory of our field, incorporating social constructivist, constructionist and activity theory?"

Remote work is a different beast

You might not work remotely right now, but the chances are that at some point in your career, and in some capacity, you will do. Remote work has its own challenges and benefits, which are alluded to in three articles in Fast Company that I want to highlight. The first is an article summarising a survey Google performed amongst 5,600 of its remote workers.

On the outset of the study, the team hypothesized that distributed teams might not be as productive as their centrally located counterparts. “We were a little nervous about that,” says [Veronica] Gilrane [manager of Google’s People Innovation Lab]. She was surprised to find that distributed teams performed just as well. Unfortunately, she also found that there is a lot more frustration involved in working remotely. Workers in other offices can sometimes feel burdened to sync up their schedules with the main office. They can also feel disconnected from the team.

That doesn't surprise me at all. Even though I probably spend less time AFK (Away From Keyboard) as a remote worker than I would in an office, there's not that performative element, where you have to look like you're working. Sometimes work doesn't look like work; it looks like going for a run to think about a problem, or bouncing an idea off a neighbour as you walk back to your office with a cup of tea.

The main thing, as this article points out, is that it's really important to have an approach that focuses on results rather than time spent doing the work. You do have to have some process, though:

[I]t’s imperative that you stress disciplinary excellence; workers at home don’t have a manager peering over their shoulder, so they have to act as their own boss and maintain a strict schedule to get things done. Don’t try to dictate every aspect of their lives–remote work is effective because it offers workers flexibility, after all. Nonetheless, be sure that you’re requesting regular status updates, and that you have a system in place to measure productivity.

Fully-remote working is different to 'working from home' a day or two per week. It does take discipline, if only to stop raiding the biscuit tin. But it's also a different mindset, including intentionally sharing your work much more than you'd do in a co-located setting.

Fundamentally, as Greg Galant, CEO of a full-remote organisation, comments, it's about trust:

“My friends always say to me, ‘How do you know if anyone is really working?’ and I always ask them, ‘How do you know if anybody is really working if they are at the office?'” says Galant. “Because the reality is, you can see somebody at their desk and they can stay late, but that doesn’t mean they’re really working.”

[...]

If managers are adhering to traditional management practices, they’re going to feel anxiety with remote teams. They’re going to want to check in constantly to make sure people are working. But checking in constantly prevents work from getting done.

Remote work is strange and difficult to describe to anyone who hasn't experienced it. You can, for example, in the same day feel isolated and lonely, while simultaneously getting annoyed with all of the 'pings' and internal communication coming at you.

At the end of the day, companies need to set expectations, and remote workers need to set boundaries. It's the only way to avoid burnout, and to ensure that what can be a wonderful experience doesn't turn into a nightmare.


Also check out:

  • 5 Great Resources for Remote Workers (Product Hunt) — "If you’re a remote worker or spend part of your day working from outside of the office, the following tools will help you find jobs, discover the best cities for remote workers, and learn from people who have built successful freelance careers or location-independent companies."
  • Stop Managing Your Remote Workers As If They Work Onsite (ThinkGrowth) — "Managers need to back away from their conventional views of what “working hard” looks like and instead set specific targets, explain what success looks like, and trust the team to get it done where, when, and however works best for them."
  • 11 Tools That Allow us to Work from Anywhere on Earth as a Distributed Company (Ghost) — "In an office, the collaboration tools you use are akin to a simple device like a screwdriver. They assist with difficult tasks and lessen the amount of effort required to complete them. In a distributed team, the tools you use are more like life-support. Everything to do with distributed team tools is about clawing back some of that contextual awareness which you've lost by not being in the same space."

Culture eats strategy for breakfast

The title of this post is a quotation from management consultant, educator, and author Peter Drucker. Having worked in a variety of organisations, I can attest to its truth.

That's why, when someone shared this post by Grace Krause, which is basically a poem about work culture, I paid attention. Entitled Appropriate Channels, here's a flavour:

We would like to remind you all
That we care deeply
About our staff and our students
And in no way do we wish to silence criticism
But please make use of the
Appropriate Channels

The Appropriate Channel is tears cried at home
And not in the workplace
Please refrain from crying at your desk
As it might lower the productivity of your colleagues

Organisational culture is difficult because of the patriarchy. I selected this part of the poem, as I've come to realise just how problematic it is to let people know (through words, actions, or policies) that it's not OK to cry at work. If we're to bring our full selves to work, then emotion is part of it.

Any organisation has a culture, and that culture can be changed, for better or for worse. Restaurants are notoriously toxic places to work, which is why this article in Quartz is interesting:

Since four-time James Beard award winner Gabrielle Hamilton opened Prune’s doors in 1999, she, along with her co-chef Ashley Merriman, have established a set of principles that help guide employees at the restaurant. According to Hamilton and Merriman, the code has a kind of transformative power. It’s helped the kitchen avoid becoming a hierarchical, top-down fiefdom—a concentration of power that innumerable chefs have abused in the past. It can turn obnoxious, entitled patrons into polite diners who are delighted to have a seat at the table. And it’s created the kind of environment where Hamilton and Merriman, along with their staff, want to spend much of their day.

The five core values of their restaurant, which I think you could apply to any organisation, are:

  1. Be thorough and excellent in everything that you do
  2. Be smart and funny
  3. Be disarmingly honest
  4. Work without division of any kind
  5. Practise servant leadership

We live in the 'age of burnout', according to another article in Quartz, but there's no reason why we can't love the work we do. It's all about finding the meaning behind the stuff we get done on a daily basis:

Our freedom to make meaning is both a blessing and a curse. To get somewhat existential about it, “work,” and the problems associated with it as an amorphous whole, do not exist: For the individual, only his or her work exists, and the individual is in control of that, with the very real power radically to change the situation. You could start the process of changing your job right now, today. Yes, arguments about the practicality of that choice well up fast and high. Yes, you would have to find another way to pay the bills. That doesn’t negate the fact that, fundamentally, you are free.

It's important to remember this, that we choose to do the work we do, that we don't have to work for a single employer, and that we can tell a different story about ourselves at any point we choose. It might not be easy, but it's certainly doable.


Things that people think are wrong (but aren't)

I've collected a bunch of diverse articles that seem to be around the topic of things that people think are wrong, but aren't really. Hence the title.

I'll start with something that everyone over a certain age seems to have a problem with, except for me: sleep. BBC Health lists these common sleep myths:

  1. You can cope on less than five hours' sleep
  2. Alcohol before bed boosts your sleep
  3. Watching TV in bed helps you relax
  4. If you're struggling to sleep, stay in bed
  5. Hitting the snooze button
  6. Snoring is always harmless

My smartband regularly tells me that I sleep better than 93% of people, and I think that's because of how much I prioritise sleep. I've also got a system, which I've written about before for the times when I do have a rough night.

I like routine, but I also like mixing things up, which is why I appreciate chunks of time at home interspersed with travel. Oliver Burkeman, writing in The Guardian, suggests, however, that routines aren't the be-all and end-all:

Some people are so disorganised that a strict routine is a lifesaver. But speaking as a recovering rigid-schedules addict, trust me: if you click excitedly on each new article promising the perfect morning routine, you’re almost certainly not one of those people. You’re one of the other kind – people who’d benefit from struggling less to control their day, responding a bit more intuitively to the needs of the moment. This is the self-help principle you might call the law of unwelcome advice: if you love the idea of implementing a new technique, it’s likely to be the opposite of what you need.

Expecting something new to solve an underlying problem is a symptom of our culture's focus on the new and novel. While there's so much stuff out there we haven't experienced, should we spend our lives seeking it out to the detriment of the tried and tested, the things that we really enjoy?

On the recommendation of my wife, I recently listened to a great episode of the Off Menu podcast featuring Victoria Coren Mitchell. It's not only extremely entertaining, but she mentions how, for her, a nice Ploughman's lunch is better than some fancy meal.

This brings me to an article in The Atlantic by Joe Pinsker, who writes that kids who watch and re-watch the same film might be on to something:

In general, psychological and behavioral-economics research has found that when people make decisions about what they think they’ll enjoy, they often assign priority to unfamiliar experiences—such as a new book or movie, or traveling somewhere they’ve never been before. They are not wrong to do so: People generally enjoy things less the more accustomed to them they become. As O’Brien [professor at the University of Chicago’s Booth School of Business] writes, “People may choose novelty not because they expect exceptionally positive reactions to the new option, but because they expect exceptionally dull reactions to the old option.” And sometimes, that expected dullness might be exaggerated.

So there's something to be said for re-reading novels you read when you were younger instead of something shortlisted for a prize, or discounted in the local bookshop. I found re-reading Dostoevsky's Crime & Punishment recently exhilarating, as I probably hadn't read it since I became a parent. Different periods of your life put different spins on things that you think you already know.


Also check out:

  • The ‘Dark Ages’ Weren’t As Dark As We Thought (Literary Hub) — "At the back of our minds when thinking about the centuries when the Roman Empire mutated into medieval Europe we are unconsciously taking on the spurious guise of specific communities."
  • An Easy Mode Has Never Ruined A Game (Kotaku) — "There are myriad ways video games can turn the dials on various systems to change our assessment of how “hard” they seem, and many developers have done as much without compromising the quality or integrity of their games."
  • Millennials destroyed the rules of written English – and created something better (Mashable) — "For millennials who conduct so many of their conversations online, this creativity with written English allows us to express things that we would have previously only been conveyed through volume, cadence, tone, or body language."


Cutting the Gordian knot of 'screen time'

Let's start this with an admission: my wife and I limit our children's time on their tablets, and they're only allowed on our games console at weekends. Nevertheless, I still maintain that wielding 'screen time' as a blunt instrument does more harm than good.

There's a lot of hand-wringing on this subject, especially around social skills and interaction. Take a recent article in The Guardian, for example, where Peter Fonagy, who is a professor of Contemporary Psychoanalysis and Developmental Science at UCL, comments:

“My impression is that young people have less face-to-face contact with older people than they once used to. The socialising agent for a young person is another young person, and that’s not what the brain is designed for.

“It is designed for a young person to be socialised and supported in their development by an older person. Families have fewer meals together as people spend more time with friends on the internet. The digital is not so much the problem – it’s what the digital pushes out.”

I don't disagree that we all need a balance here, but where's the evidence? On balance, I spend more time with my children than my father spent with my sister and me, yet my wife, two children, and I probably have fewer mealtimes sat down at a table together than I did with my parents and sister. Different isn't always worse, and in our case it's often due to their sporting commitments.

So I'd agree with Jordan Shapiro, who writes that the World Health Organisation's guidelines on screen time for kids aren't particularly useful. He quotes several sources that dismiss the WHO's recommendations:

Andrew Przybylski, the Director of Research at the Oxford Internet Institute, University of Oxford, said: “The authors are overly optimistic when they conclude screen time and physical activity can be swapped on a 1:1 basis.” He added that, “the advice overly focuses on quantity of screen time and fails to consider the content and context of use. Both the American Academy of Pediatricians and the Royal College of Paediatrics and Child Health now emphasize that not all screen time is created equal.”

That being said, parents still need some guidance. As I've said before, my generation of parents are the first ones having to deal with all of this, so where do we turn for advice?

An article by Roja Heydarpour suggests three strategies, including one from Mimi Ito who I hold in the utmost respect for her work around Connected Learning:

“Just because [kids] may meet an unsavory person in the park, we don’t ban them from outdoor spaces,” said Mimi Ito, director of the Connected Learning Lab at University of California-Irvine, at the 10th annual Women in the World Summit on Thursday. After years of research, the mother of two college-age children said she thinks parents need to understand how important digital spaces are to children and adjust accordingly.

Taking away access to these spaces, she said, is taking away what kids perceive as a human right. Gaming is like the proverbial water cooler for many boys, she said. And for many girls, social media can bring access to friends and stave off social isolation. “We all have to learn how to regulate our media consumption,” Ito said. “The longer you delay kids being able to use those muscles, the longer you delay kids learning how to regulate.”

I feel a bit bad reading that, as we've recently banned my son from the game Fortnite, which we felt was taking over his life a little too much. It's not forever, though, and he does have to find a balance between the game having a place in his life and him literally talking about it all of the freaking time.

One authoritative voice in the area is my friend and sometimes collaborator Ian O'Byrne, who, together with Kristen Hawley Turner, has created screentime.me which features a blog, podcast, and up-to-date research on the subject. Well worth checking out!


Also check out:

  • Teens 'not damaged by screen time', study finds (BBC Technology) — "The analysis is robust and suggests an overall population effect too small to warrant consideration as a public health problem. They also question the widely held belief that screens before bedtime are especially bad for mental health."
  • Human Contact Is Now a Luxury Good (The New York Times) — "The rich have grown afraid of screens. They want their children to play with blocks, and tech-free private schools are booming. Humans are more expensive, and rich people are willing and able to pay for them. Conspicuous human interaction — living without a phone for a day, quitting social networks and not answering email — has become a status symbol."
  • NHS sleep programme ‘life changing’ for 800 Sheffield children each year (The Guardian) — "Families struggling with children’s seriously disrupted sleep have seen major improvements by deploying consistent bedtimes, banning sugary drinks in the evening and removing toys and electronics from bedrooms."

The benefits of Artificial Intelligence

As an historian, I’m surprisingly bad at recalling facts and dates. However, I’d argue that the study of history is actually about the relationship between those facts and dates — which, let’s face it, so long as you’re in the right ballpark, you can always look up.

Understanding the relationship between things, I’d argue, is a demonstration of higher-order competence. This is described well by the SOLO Taxonomy, which I featured in my ebook on digital literacies:

SOLO Taxonomy

This is important, as it helps to explain two related concepts around which people often get confused: ‘artificial intelligence’ and ‘machine learning’. If you look at the diagram above, you can see that the ‘Extended Abstract’ of the SOLO taxonomy also includes the ‘Relational’ part. Similarly, the field of ‘artificial intelligence’ includes ‘machine learning’.

There are some examples of each in this WIRED article, but for the purposes of this post let’s just leave it there. Some of what I want to talk about here involves machine learning, and some involves artificial intelligence more broadly. It’s all interesting and affects the future of tech in education and society.

If you’re a gamer, you’ll already be familiar with some of the benefits of AI. ‘CPU players’ are no longer dumb, but actually play a lot like human players. That means that, with no unfair advantages programmed in by the game's designers, the AI can work out strategies to defeat opponents. The recent example of OpenAI Five beating the best players at a game called Dota 2, and then internet teams finding vulnerabilities in the system, is a fascinating battle of human versus machine:

“Beating OpenAI Five is a testament to human tenacity and skill. The human teams have been working together to get those wins. The way people win is to take advantage of every single weakness in Five—some coming from the few parts of Five that are scripted rather than learned—gradually build up resources, and most importantly, never engage Five in a fair fight.” OpenAI co-founder Greg Brockman told Motherboard.

Deepfakes are created via "a technique for human image synthesis based on artificial intelligence... that can depict a person or persons saying things or performing actions that never occurred in reality". There's plenty of porn, of course, but also politically-motivated videos claiming that people said things they never did.

There are benefits here, though, too. Recent AI research shows how, soon, it will be possible to replace any game character with one created from your own videos. In other words, you will be able to be in the game!

It only took a few short videos of each activity -- fencing, dancing and tennis -- to train the system. It was able to filter out other people and compensate for different camera angles. The research resembles Adobe's "content-aware fill" that also uses AI to remove elements from video, like tourists or garbage cans. Other companies, like NVIDIA, have also built AI that can transform real-life video into virtual landscapes suitable for games.

It's easy to be scared of all of this, fearful that it's going to ravage our democratic institutions and cause a meltdown of civilisation. But, actually, the best way to ensure that it's not used for those purposes is to try and understand it. To play with it. To experiment.

Algorithms have already been appointed to the boards of some companies and, if you think about it, there are plenty of job roles where automated testing is entirely normal. I’m looking forward to a world where AI makes our lives a whole lot easier and friction-free.


Also check out:

  • AI generates non-stop stream of death metal (Engadget) — "The result isn't entirely natural, if simply because it's not limited by the constraints of the human body. There are no real pauses. However, it certainly sounds the part: you'll find plenty of hyper-fast drums, guitar thrashing and guttural growling."
  • How AI Will Turn Us All Into Filmmakers (WIRED) — "AI-assisted editing won’t make Oscar-worthy auteurs out of us. But amateur visual storytelling will probably explode in complexity."
  • Experts Weigh in on Merits of AI in Education (THE Journal) — "AI systems are perfect for analyzing students’ progress, providing more practice where needed and moving on to new material when students are ready," she stated. "This allows time with instructors to focus on more complex learning, including 21st-century skills."

The drawbacks of Artificial Intelligence

It’s really interesting to do philosophical thought experiments with kids. For example, the trolley problem, a staple of undergraduate Philosophy courses, is also accessible to children from a fairly young age.

You see a runaway trolley moving toward five tied-up (or otherwise incapacitated) people lying on the tracks. You are standing next to a lever that controls a switch. If you pull the lever, the trolley will be redirected onto a side track, and the five people on the main track will be saved. However, there is a single person lying on the side track. You have two options:
  1. Do nothing and allow the trolley to kill the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the more ethical option?

With the advent of autonomous vehicles, these are no longer idle questions. The vehicles, which have to make split-second decisions, may have to decide whether to hit a pram containing a baby, or swerve and hit a couple of pensioners. Due to cultural differences, even that's not something that can be easily programmed, as the diagram below demonstrates.

Self-driving cars: pedestrians vs passengers

For two countries that are so close together, it’s really interesting that Japan and China are on the opposite ends of the spectrum when it comes to saving passengers or pedestrians!
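To see why this can't simply be programmed away, here's a toy sketch in Python with entirely invented numbers (the Japan/China direction follows the Moral Machine-style survey findings, but the weights themselves are made up). Even if you reduced each country's preference to a single number, the same split-second situation would demand opposite decisions depending on locale:

```python
# Hypothetical per-country weights for how strongly to prefer sparing
# pedestrians over passengers. The numbers are invented for illustration.
PEDESTRIAN_PREFERENCE = {
    "Japan": 0.8,  # invented weight: leans towards sparing pedestrians
    "China": 0.2,  # invented weight: leans towards sparing passengers
}

def swerve_to_spare_pedestrians(country: str, threshold: float = 0.5) -> bool:
    """Return True if this toy 'policy' would spare pedestrians in `country`."""
    return PEDESTRIAN_PREFERENCE.get(country, threshold) >= threshold

print(swerve_to_spare_pedestrians("Japan"))  # True
print(swerve_to_spare_pedestrians("China"))  # False
```

One dictionary entry per country is obviously a caricature, but it makes the point: whichever weights you pick, you've effectively legislated an ethics.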

The authors of the paper cited in the article are careful to point out that countries shouldn’t simply create laws based on popular opinion:

Edmond Awad, an author of the paper, brought up the social status comparison as an example. “It seems concerning that people found it okay to a significant degree to spare higher status over lower status,” he said. “It's important to say, ‘Hey, we could quantify that’ instead of saying, ‘Oh, maybe we should use that.’” The results, he said, should be used by industry and government as a foundation for understanding how the public would react to the ethics of different design and policy decisions.

This is why we need more people with a background in the Humanities in tech, and why we need to be having a real conversation about ethics and AI.

Of course, that’s easier said than done, particularly when the companies in a position to make significant strides in this regard have near-monopolies in their field and are pulling in eye-watering amounts of money. In one recent example, Google convened an AI ethics committee that was attacked as a smokescreen:

Academic Ben Wagner says tech’s enthusiasm for ethics paraphernalia is just “ethics washing,” a strategy to avoid government regulation. When researchers uncover new ways for technology to harm marginalized groups or infringe on civil liberties, tech companies can point to their boards and charters and say, “Look, we’re doing something.” It deflects criticism, and because the boards lack any power, it means the companies don’t change.

 [...]

“It’s not that people are against governance bodies, but we have no transparency into how they’re built,” [Rumman] Chowdhury [a data scientist and lead for responsible AI at management consultancy Accenture] tells The Verge. With regard to Google’s most recent board, she says, “This board cannot make changes, it can just make suggestions. They can’t talk about it with the public. So what oversight capabilities do they have?”

As we saw around privacy, it takes a trusted multi-national body like the European Union to create a regulatory framework like GDPR for these issues. Thankfully, they've started that process by releasing guidelines containing seven requirements to create trustworthy AI:
  1. Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  2. Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  3. Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  4. Transparency: The traceability of AI systems should be ensured.
  5. Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  6. Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

The problem isn't that people are going out of their way to build malevolent systems to rob us of our humanity. As usual, bad things happen because of more mundane requirements. For example, The Guardian has recently reported on concerns around predictive policing, and hospitals using AI to predict everything from no-shows to risk of illness.

When we throw facial recognition into the mix, things get particularly scary. It’s all very well for Taylor Swift to use this technology to identify stalkers at her concerts, but given its massive drawbacks, perhaps we should restrict facial recognition somehow?

Human bias can seep into AI systems. Amazon abandoned a recruiting algorithm after it was shown to favor men’s resumes over women’s; researchers concluded an algorithm used in courtroom sentencing was more lenient to white people than to black people; a study found that mortgage algorithms discriminate against Latino and African American borrowers.

Facial recognition might be a cool way to unlock your phone, but the kind of micro-expression reading that made for great television in the series Lie to Me is now being exploited in what is expected to become a $20bn industry.

The frustrating thing is that it’s very difficult for us as individuals to make a difference here. The problem needs to be tackled at a much higher level, as with GDPR. That will take time, and meanwhile the use of AI is exploding. Be careful out there.


Also check out:

Opting in and out of algorithms

It's now over seven years since I submitted my doctoral thesis on digital literacies. Since then, almost the entire time my daughter has been alive, the world has changed a lot.

Writing in The Conversation, Anjana Susarla explains her view that digital literacy goes well beyond functional skills:

In my view, the new digital literacy is not using a computer or being on the internet, but understanding and evaluating the consequences of an always-plugged-in lifestyle. This lifestyle has a meaningful impact on how people interact with others; on their ability to pay attention to new information; and on the complexity of their decision-making processes.

Digital literacies are plural, context-dependent and always evolving. Right now, I think Susarla is absolutely correct to be focusing on algorithms and the way they interact with society. Ben Williamson is definitely someone to follow and read up on in that regard.

Over the past few years I've been trying (both directly and indirectly) to educate people about the impact of algorithms on everything from fake news to privacy. It's one of the reasons I don't use Facebook, for example, and go out of my way to explain to others why they shouldn't either:

A study of Facebook usage found that when participants were made aware of Facebook’s algorithm for curating news feeds, about 83% of participants modified their behavior to try to take advantage of the algorithm, while around 10% decreased their usage of Facebook.

[...]

However, a vast majority of platforms do not provide either such flexibility to their end users or the right to choose how the algorithm uses their preferences in curating their news feed or in recommending them content. If there are options, users may not know about them. About 74% of Facebook’s users said in a survey that they were not aware of how the platform characterizes their personal interests.

Although I'm still not going to join Facebook, one reason I'm a little more chilled out about algorithms and privacy these days is because of the GDPR. If it's regulated effectively (as I think it will be) then it should really keep Big Tech in check:

As part of the recently approved General Data Protection Regulation in the European Union, people have “a right to explanation” of the criteria that algorithms use in their decisions. This legislation treats the process of algorithmic decision-making like a recipe book. The thinking goes that if you understand the recipe, you can understand how the algorithm affects your life.

[...]

But transparency is not a panacea. Even when an algorithm’s overall process is sketched out, the details may still be too complex for users to comprehend. Transparency will help only users who are sophisticated enough to grasp the intricacies of algorithms.

I agree that it's not enough to just tell people that they're being tracked without them being able to do something about it. That leads to technological defeatism. We need simple, easy-to-use tools that enable user privacy and security. These aren't going to come through tech industry self-regulation, but through regulatory frameworks like GDPR.

Source: The Conversation


Also check out:

Let's not force children to define their future selves through the lens of 'work'

I discovered the work of Adam Grant through Jocelyn K. Glei's excellent Hurry Slowly podcast. He has his own, equally excellent podcast, called WorkLife which he creates with the assistance of TED.

Writing in The New York Times as a workplace psychologist, Grant notes just how problematic the question "what do you want to be when you grow up?" actually is:

When I was a kid, I dreaded the question. I never had a good answer. Adults always seemed terribly disappointed that I wasn’t dreaming of becoming something grand or heroic, like a filmmaker or an astronaut.

Let's think: from what I can remember, I wanted to be a journalist, and then an RAF pilot. Am I unhappy that I'm neither of these things? No.

Perhaps it's because a job is more tangible than an attitude or approach to life, but not once can I remember being asked what kind of person I wanted to be. It was always "what do you want to be when you grow up?", and the insinuation was that the answer was job-related.

My first beef with the question is that it forces kids to define themselves in terms of work. When you’re asked what you want to be when you grow up, it’s not socially acceptable to say, “A father,” or, “A mother,” let alone, “A person of integrity.”

[...]

The second problem is the implication that there is one calling out there for everyone. Although having a calling can be a source of joy, research shows that searching for one leaves students feeling lost and confused.

Another fantastic podcast episode I listened to recently was Tim Ferriss' interview of Caterina Fake. She's had an immensely successful career, yet her key messages during that conversation were around embracing your 'shadow' (i.e. melancholy, etc.) and ensuring that you have a rich inner life.

While the question beloved of grandparents around the world seems innocuous enough, these things have material effects on people's lives. Children are eager to please, and internalise other people's expectations.

I’m all for encouraging youngsters to aim high and dream big. But take it from someone who studies work for a living: those aspirations should be bigger than work. Asking kids what they want to be leads them to claim a career identity they might never want to earn. Instead, invite them to think about what kind of person they want to be — and about all the different things they might want to do.

The jobs I've had over the last decade didn't really exist when I was a child, so it would have been impossible to point to them. Let's encourage children to consider the ways they can think and act to change the world for the better - not just how they're going to pay the bills while doing so.

Source: The New York Times


Also check out:

  • The Creeping Capitalist Takeover of Higher Education (Highline) — "As our most trusted universities continue to privatize large swaths of their academic programs, their fundamental nature will be changed in ways that are hard to reverse. The race for profits will grow more heated, and the social goal of higher education will seem even more like an abstraction."
  • Social Peacocking and the Shadow (Caterina Fake) — "Social peacocking is life on the internet without the shadow. It is an incomplete representation of a life, a half of a person, a fraction of the wholeness of a human being."
  • Why and How Capitalism needs to be reformed (Economic Principles) — "The problem is that capitalists typically don’t know how to divide the pie well and socialists typically don’t know how to grow it well."

How to subscribe to Thought Shrapnel Daily

From Monday I'll be publishing Thought Shrapnel Daily five times per week. Patreon supporters get immediate and exclusive access to those updates for a one-week period after publication. They'll then be available to everyone on the open web.

Here's how you can be informed when Thought Shrapnel Daily is published:

Alternatively, you can just check thoughtshrapnel.com every day after 12pm UTC! Note that you'll need to be logged-in to Patreon to access Thought Shrapnel Daily when it's first published.


A visit to the Tate Modern by Bryan Mathers is licenced under CC-BY-ND

Giving up Thought Shrapnel for Lent

Recently, the Slack-based book club I started has been reading Cal Newport’s Digital Minimalism. His writing made me consider giving up my smartphone for Lent as a form of ‘digital detox’. However, when I sat with the idea a while, another one replaced it: give up Thought Shrapnel for Lent instead!

Why?

Putting together Thought Shrapnel is something I certainly enjoy doing, but it takes me away from other things once you factor in all of the reading, writing, and curating involved in putting out several weekly posts and a newsletter.

I’ve also got a lot of other things going on right now, with MoodleNet getting closer to a beta launch, and recently becoming a Scout Leader.

So I’m pressing pause for Lent, and have already notified the awesome people who support Thought Shrapnel via Patreon. It will be back after Easter!


Human societies, hierarchy, and networks

Human societies and cultures are complex and messy. That means if we want to even begin to start understanding them, we need to simplify. This approach from Harold Jarche, based on David Ronfeldt’s work, is interesting:

Our current triform society is based on families/communities, a public sector, and a private market sector. But this form, dominated by Markets is unable to deal with the complexities we face globally — climate change, pollution, populism/fanaticism, nuclear war, etc. A quadriform society would be primarily guided by the Network form of organizing. We are making some advances in that area but we still have challenges getting beyond nation states and financial markets.

This diagram sums up why I find it so difficult to work within hierarchies: while they're our default form of organising, they're just not very good at dealing with complexity.

Source: Harold Jarche

The introvert's dilemma

I’m more of an ambivert (“like ambidextrous but with personality”) but I definitely feel where Jessica Hagy is coming from with this one.

Source: Indexed

Success and enthusiasm (quote)

“Success is stumbling from failure to failure with no loss of enthusiasm.”

(Winston Churchill)

Foldable displays are going to make the future pretty amazing

I was in Barcelona on Thursday and Friday last week, right before the start of Mobile World Congress. There were pop-up stores and booths everywhere, including a good-looking Samsung one on Plaça de Catalunya.

While the new five-camera Nokia 9 PureView looks pretty awesome, it’s the foldable displays that have been garnering the most attention. Check out the Huawei Mate X which has just launched at $2,600:

Huawei Mate X

Although we’ve each got one in our family, tablet sales are plummeting, as smartphones get bigger. What’s on offer here seems like exactly the kind of thing I’d use — once they’ve ironed out some of the concerns around reliability/robustness, figured out where the fingerprint sensor and cameras should go, and brought down the price. A 5-inch phone which folds out into an 8-inch tablet? Yes please!

Of course, foldable displays won’t be limited to devices we carry in our pockets. We’re going to see them pretty much everywhere — round our wrists, as part of our clothes, and eventually as ‘wallpaper’ in our houses. Eventually there won’t be a surface on the planet that won’t also potentially be a screen.

So you think you're organised?

This lengthy blog post from Stephen Wolfram, founder and CEO of Wolfram Research, is not only incredible in its detail, but also reveals the author’s sheer tenacity.

I’m a person who’s only satisfied if I feel I’m being productive. I like figuring things out. I like making things. And I want to do as much of that as I can. And part of being able to do that is to have the best personal infrastructure I can. Over the years I’ve been steadily accumulating and implementing “personal infrastructure hacks” for myself. Some of them are, yes, quite nerdy. But they certainly help me be productive. And maybe in time more and more of them will become mainstream, as a few already have.

Wolfram talks about how, as a "hands-on remote CEO" of an 800-person company, he prides himself on automating and streamlining as much as possible.

At an intellectual level, the key to building this infrastructure is to structure, streamline and automate everything as much as possible—while recognizing both what’s realistic with current technology, and what fits with me personally. In many ways, it’s a good, practical exercise in computational thinking, and, yes, it’s a good application of some of the tools and ideas that I’ve spent so long building. Much of it can probably be helpful to lots of other people too; some of it is pretty specific to my personality, my situation and my patterns of activity.

Wolfram has stuck with various versions of his productivity system for over 30 years. He can search across all of his emails and 100,000(!) notebooks in a single place. It's all quite impressive, really.

What’s even more impressive, though, is that he experiments with new technologies and sees if they provide an upgrade based on his organisational principles. It reminds me a bit of Clay Shirky’s response to the question of a ‘dream setup’ being that “current optimization is long-term anachronism”.

I’ve described—in arguably quite nerdy detail—how some of my personal technology infrastructure is set up. It’s always changing, and I’m always trying to update it—and for example I seem to end up with lots of bins of things I’m not using any more (yes, I get almost every “interesting” new device or gadget that I find out about).

But although things like devices change, I’ve found that the organizational principles for my infrastructure have remained surprisingly constant, just gradually getting more and more polished. And—at least when they’re based on our very stable Wolfram Language system—I’ve found that the same is true for the software systems I’ve had built to implement them.

Well worth a read. I dare you not to be impressed.

Source: Stephen Wolfram


Blockchains: not so 'unhackable' after all?

As I wrote earlier this month, blockchain technology is not about trust, it’s about distrust. So we shouldn’t be surprised in such an environment that bad actors thrive.

Reporting on a blockchain-based currency (‘cryptocurrency’) hack, MIT Technology Review comments:

We shouldn’t be surprised. Blockchains are particularly attractive to thieves because fraudulent transactions can’t be reversed as they often can be in the traditional financial system. Besides that, we’ve long known that just as blockchains have unique security features, they have unique vulnerabilities. Marketing slogans and headlines that called the technology “unhackable” were dead wrong.

The more complicated something is, and the more you have to trust technological wizards to verify that it's true, the more problems you're storing up:

But the more complex a blockchain system is, the more ways there are to make mistakes while setting it up. Earlier this month, the company in charge of Zcash—a cryptocurrency that uses extremely complicated math to let users transact in private—revealed that it had secretly fixed a “subtle cryptographic flaw” accidentally baked into the protocol. An attacker could have exploited it to make unlimited counterfeit Zcash. Fortunately, no one seems to have actually done that.
It's bad enough when people lose money through these kinds of hacks, but when we start talking about programmable blockchains (so-called 'smart contracts') then we're in a whole different territory.

A smart contract is a computer program that runs on a blockchain network. It can be used to automate the movement of cryptocurrency according to prescribed rules and conditions. This has many potential uses, such as facilitating real legal contracts or complicated financial transactions. Another use—the case of interest here—is to create a voting mechanism by which all the investors in a venture capital fund can collectively decide how to allocate the money.

Human culture is dynamic and ever-changing; it's not something we should be hard-coding. And it's certainly not something we should be hard-coding based on the very narrow worldview of those who understand the intricacies of blockchain technology.
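To make that excerpt concrete, here's a minimal sketch in plain Python of the kind of stake-weighted voting mechanism it describes. This is an illustrative model only, not actual on-chain code; the class, method names, and numbers are all invented for the example:

```python
# Illustrative model of a stake-weighted investor vote, the kind of rule a
# smart contract hard-codes. Plain Python, not an actual blockchain contract.

class FundVote:
    """Investors vote on a proposal; it passes if a majority of stake approves."""

    def __init__(self, investors):
        # voting power is proportional to each investor's stake
        self.stakes = dict(investors)
        self.votes = {}

    def vote(self, investor, approve):
        if investor not in self.stakes:
            raise ValueError("not an investor")
        self.votes[investor] = approve

    def approved(self):
        # the proposal passes when approving stake exceeds half the total
        total = sum(self.stakes.values())
        in_favour = sum(self.stakes[i] for i, v in self.votes.items() if v)
        return in_favour * 2 > total


fund = FundVote({"alice": 60, "bob": 30, "carol": 10})
fund.vote("alice", True)
fund.vote("bob", False)
print(fund.approved())  # True: alice's 60% stake carries the vote
```

A real smart contract would run rules like these on-chain (typically written in a language like Solidity), which is precisely what makes any bug or bad assumption in the rules so hard to fix after deployment.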

It’s particularly delicious that it’s the MIT Technology Review commenting on all of this, given that they’ve been the motive force behind Blockcerts, “the open standard for blockchain credentials” (that nobody actually needs).

Source: MIT Technology Review


Open Badges and ADCs

As someone who’s been involved with Open Badges since 2012, I’m always interested in the ebbs and flows of the language around their promotion and use.

An article on EdScoop cites a dean at UC Irvine, who talks about ‘Alternative Digital Credentials’:

Alternative digital credentials — virtual certificates for skill verification — are an institutional imperative, said Gary Matkin, dean of continuing education at the University of California, Irvine, who predicts they will become widely available in higher education within five years.

“Like in the 90s when it was obvious that education was going to begin moving to an online format,” Matkin told EdScoop, “it is now the current progression that institutions will have to begin to issue ADCs.”

Out of all of the people I’ve spoken to about Open Badges in the past seven years, universities are the ones who least like the term ‘badges’.

The article links to a report by the International Council for Open and Distance Education (ICDE) on ADCs which cites seven reasons that they’re an ‘institutional imperative’:

  1. ADCs (and their non-university equivalents) are already widely offered
  2. Traditional transcripts are not serving the workforce. The primary failure of traditional transcripts is that they do not connect verified competencies to jobs
  3. Accrediting agencies are beginning to focus on learning outcomes
  4. Young adults are demanding shorter and more workplace-relevant learning
  5. Open education demands ADCs
  6. Hiring practices increasingly depend on digital searches
  7. An ADC ecosystem is developing

All of which seems reasonable. However, I don't necessarily agree with the report's sweeping prediction that:

"Efforts to set universal technical and quality standards for badges and to establish comprehensive repositories for credentials conforming to a single standard will not succeed."

You can't lump quality standards in with technical standards. The former is obviously doomed to fail, whereas the latter is somewhat inevitable.

Source: EdScoop


On anger (quote)

“Any person capable of angering you becomes your master. They can anger you only when you permit yourself to be disturbed by them.”

(Epictetus)


What UK children are watching (and why)

Only 40 children took part in this Ofcom research, and (as far as I can tell) none were in the North East of England where I live. Nevertheless, as a parent to a 12-year-old boy and an eight-year-old girl, I found the report interesting.

Key findings:
  • While some children took part in organised after-school clubs at least once a week, not many of them did other or more spontaneous activities (e.g. physically meeting friends or cultivating hobbies) on a regular basis
  • Many children used social media and other messaging platforms (e.g. chat functions in games) to continually keep in touch with their friends while at home
  • Often children described going out to meet friends face-to-face as ‘too much effort’ and preferred to spend their free time on their own at home
  • While some children managed to fit screen time around other offline interests and passions, for many, watching videos was one of the main activities taking up their spare time
  • YouTube was the most popular platform for children to consume video content, followed by Netflix. Although still present in many children’s lives, Public Service Broadcasters’ Video On Demand platforms and live TV were used more rarely and seen as less relevant to children like them
  • Many parents had attempted to enforce rules about online video watching, especially with younger children. They worried that they could not effectively monitor it, as opposed to live or on-demand TV, which was usually watched on the main TV. Some were frustrated by the amount of time children were spending on personal screens.

I've recently volunteered as an Assistant Scout Leader, and last night went with Scouts and Cubs to the ice rink in Newcastle on the train. As I'd expect, most of the 12-year-old boys had their smartphones out and most of the girls were talking to one another. The boys were playing some games, but were mostly watching YouTube videos of other people playing games.

Ofcom report table

All kids with access to a screen watch YouTube. Why?

  • The appeal of YouTube also appeared rooted in the characteristics of specific genres of content.
    • Some children who watched YouTubers and vloggers seemed to feel a sense of connection with them, especially when they believed that they had something in common
    • Many children liked “satisfying” videos which simulated sensory experiences
    • Many consumed videos that allowed them to expand on their interests; sometimes in conjunction to doing activities themselves, but sometimes only pursuing them by watching YouTube videos
    • These historically ‘offline’ experiences were part of YouTube’s attraction, potentially in contrast to the needs fulfilled by traditional TV.

Until I saw my son really level up his gameplay by watching YouTubers play the same games as him, I didn't really get it. There's lots of moral panic about YouTube's algorithms, but there's also a lot to celebrate in the fact that children have a bit more autonomy and control these days.

The appeal of YouTube for many of the children in the sample seemed to be that they were able to feed and advance their interests and hobbies through it. Due to the variety of content available on the platform, children were able to find videos that corresponded with interests they had spoken about enjoying offline; these included crafts, sports, drawing, music, make-up and science. Notably, in some cases, children were watching people on YouTube pursuing hobbies that they did not do themselves or had recently given up offline.

Really interesting stuff, and well worth digging into!

Source: Ofcom (via Benedict Evans)

Individual steps to tackle climate change

Tomorrow, pupils at some schools in the UK will walk out and join protests around climate change. There are none in my local area of which I’m aware, but it has got me thinking about how I talk to my own children about this.

The above infographic was created by Seth Wynes and Kimberly Nicholas and is featured in an article about the most effective steps you can take as an individual to tackle climate change.

While these are all important steps (I honestly didn’t know quite how bad transatlantic flights are!) it’s important to remember that industry and big business should shoulder most of the burden here. What they can do dwarfs what we can do individually.

Still, it all counts. And we should get on it. Time’s running out.

Source: phys.org

Games (and learning) mechanics

The average age of those who play video games? Early thirties, and rising. So, I’m happy to say that purchasing Red Dead Redemption 2 is one of the best decisions I’ve made so far in 2019.

It’s an incredible, immersive game within which you could easily lose a few hours at a time. And, as with games like Fortnite, it’s being tweaked and updated after release to improve the playing experience, particularly the online aspect.

What interests me in particular as an educator and a technologist is the way that the designers are thinking carefully about the in-game mechanics based on what players actually do. It’s easy to theorise what people might do, but what they actually do is a constant surprise to anyone who’s ever designed something they’ve asked another person to use.

Engadget mentions one update to Red Dead that particularly jumped out at me:

The update also brings a new system that highlights especially aggressive players. The more hostile you are, the more visible you will become to other players on the map with an increasingly darkening dot. Your visibility will increase in line with bad deeds such as attacking players and their horses outside of a structured mode, free roam mission or event. But, start behaving, and your visibility will fade over time. Rockstar is also introducing the ability to parlay with an entire posse, rather than individual players, which should also help to reduce how often players are killed by trolls.
In other words, anti-social behaviour is being dealt with by games mechanics that make it harder for people to act inappropriately.

But my favourite update?

The update will also see the arrival of bounties. Any player that's overly aggressive and consistently breaks the law will have a bounty placed on their head, and once it's high enough NPC [Non-Player Character] bounty hunters will get on your tail. Another mechanism to dissuade griefing but perhaps a missed opportunity to allow players to become temporary bounty hunters and enact some sweet vengeance on the players that keep ruining their gameplay.
We have a tendency in education to simply ban things we don't like. That might be excluding people from courses, or ultimately from institutions. However, when customers are at stake, games designers have a wide range of options to influence outcomes positively.
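Reputation systems like this are straightforward to model. Below is a minimal Python sketch of the mechanic as Engadget describes it: hostility rises with bad deeds, fades with good behaviour, and triggers a bounty past a threshold. All names and numbers here are my own illustrative guesses, not Rockstar's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class PlayerReputation:
    """Toy model of the hostility system described above."""

    hostility: float = 0.0

    # Illustrative constants, not Rockstar's real values
    BAD_DEED_PENALTY = 10.0   # e.g. attacking a player outside a structured mode
    DECAY_PER_TICK = 1.0      # "start behaving, and your visibility will fade"
    BOUNTY_THRESHOLD = 50.0   # NPC bounty hunters dispatched past this point

    def bad_deed(self) -> None:
        """Hostile acts increase the player's hostility score."""
        self.hostility += self.BAD_DEED_PENALTY

    def tick(self) -> None:
        """Hostility decays over time when the player behaves."""
        self.hostility = max(0.0, self.hostility - self.DECAY_PER_TICK)

    @property
    def map_visibility(self) -> float:
        """The 'increasingly darkening dot': 0.0 (invisible) to 1.0 (fully visible)."""
        return min(1.0, self.hostility / self.BOUNTY_THRESHOLD)

    @property
    def has_bounty(self) -> bool:
        """Past the threshold, NPC bounty hunters are sent after the player."""
        return self.hostility >= self.BOUNTY_THRESHOLD
```

The interesting design property is that the punishment is continuous and reversible: visibility scales with recent behaviour rather than flipping a permanent "griefer" flag, so players always have a route back to good standing.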

I think we’ve got a lot still to learn in education from games design.

Source: Engadget


Image by BagoGames used under a CC BY license

Is edtech even a thing any more?

Until recently, Craig Taylor included the following in his Twitter bio:

Dreaming of a day when we can drop the e from elearning and the m from mobile learning & just crack on.
Last week, I noticed that Stephen Downes, in reply to Scott Leslie on Mastodon, had mentioned that he didn't even think that 'e-learning' or 'edtech' was really a thing any more, so perhaps Craig dropping that from his bio was symptomatic of a wider shift?
I'm not sure anyone has any status in online learning any more. I'm wondering, maybe it's not even a discipline any more. There's learning analytics and open pedagogy and experience design, etc., but I'm not sure there's a cohesive community looking at what we used to call ed tech or e-learning.
His comments were part of a thread, so I decided not to take it out of context. However, Stephen has subsequently written his own post about it, so it's obviously something on his mind.

Reflecting on what he covers in OLDaily, he notes that, while everything definitely falls within something broadly called ‘educational technology’, there’s very few people working at that meta level — unlike, say, ten years ago:

[I]n 2019 there's no community that encompasses all of these things. Indeed, each one of these topics has not only blossomed its own community, but each one of these communities is at least as complex as the entire field of education technology was some twenty years ago. It's not simply that change is exponential or that change is moving more and more rapidly, it's that change is combinatorial - with each generation, the piece that was previously simple gets more and more complex.
I think Stephen's got what Venkatesh Rao might deem an 'elder blog':
The concept is derived from the idea of an elder game in gaming culture: a game where most players have completed a full playthrough and are focusing on second-order play.
In other words, Stephen has spent a long time exploring and mapping the emerging territory. What's happening now, it could be argued, is that new infrastructure is emerging, but using the same territory.

So, to continue the metaphor, a new community springs up around a new bridge or tunnel, but it’s not so different from what went before. It’s more convenient, maybe, and perhaps increases capacity, but it’s not really changing the overall landscape.

So what is the value of OLDaily? I don't know. In one sense, it's the same value it always had - it's a place for me to chronicle all those developments in my field, so I have a record of them, and am more likely to remember them. And I think it's a way - as it always has been - for people who do look at the larger picture to stay literate. Not literate in the sense of "I could build an educational system from scratch" but literate in the sense of "I've heard that term before, and I know it refers to this part of the field."
I find Stephen's work invaluable. Along with the likes of Audrey Watters and Martin Weller, we need wise voices guiding us — whether or not we decide to call what we're doing 'edtech'.

Source: OLDaily

Optimise for energy and motivation

While this post has a clickbait-y subtitle (‘Why I quit a $500K job at Amazon to work for myself’) it nevertheless makes an important point:

Last week I left my cushy job at Amazon after 8 years. Despite getting rewarded repeatedly with promotions, compensation, recognition, and praise, I wasn’t motivated enough to do another year.
As author Daniel Pink points out in his book Drive: The Surprising Truth About What Motivates Us, when it comes down to it, neither money nor people saying "good job" is why we go to work. We want to do stuff that's meaningful.
What kind of work would I do if I had to do it forever? Not something that I did until I reached some milestone (an exit), but something that I would consider satisfactory if I continued to do it until I’m 80. What is out there that I could do that would make me excited waking up every day for the next 45 years that could also earn me enough money to cover my expenses? Is that too unambitious? I don’t think so.
I'm nowhere near this guy's earnings, but I do earn about twice as much as I did as a senior leader in schools. That would have been a ridiculous amount of money to me a few years ago, but you get used to it. The point is that you have to be doing something sustainable.
The things that don’t wear off are those that I’ve been doing since I was a kid, when nothing was forcing me to do them. Things such as writing code, selling my creations, charting my own path, calling it like I saw it. I know my strengths, and I know what motivates me, so why not do this all the time? I’m lucky to live in a time where I can do something independently in my area of expertise without requiring large amounts of capital or outside investors. So that’s what I’m doing.
A couple of weeks ago I volunteered to be an Assistant Scout Leader. I realised how much I missed interacting with kids of that age (over and above my own, of course), but teaching isn't the only way of doing that.

The interesting thing is that, if you do something you find interesting, something that gets you out of bed in the morning, the money will come at some point. I’m not naive enough to think that “follow your dreams” is good career advice, but you certainly shouldn’t be doing something you hate. Not long-term, anyway.

On that note, it’s been a delight to see how Bryan Mathers is pulling together his artistic chops (which he’s honed from zero this decade) and his coding skills to create The Remixer Machine. It seemed to come from nowhere but, of course, it’s taken skills and interests that he’s combined to make something worthwhile in the world.

So, what can you do practically if you’re reading this? Optimise for energy and motivation. In practice, that means do something that you love at a time you’ve got most energy. If you’re a morning person, do something that inspires you before work. If you’re super-motivated around lunchtime, do something in your lunch break. Night owl? You know what to do…

Source: Daniel Vassallo

Process and product of change (quote)

“To be in process of change is not an evil, any more than to be the product of change is a good.”

(Marcus Aurelius)

Tenacious will (quote)

“Your will must be tenacious, not your judgement.”

(Baltasar Gracián)

Why the internet is less weird these days

I can remember sneakily accessing the web when I was about fifteen. It was a pretty crazy place, the likes of which you only really see these days in the far-flung corners of the regular internet or on the dark web.

Back then, there were conspiracy theories, there was porn, and there was all kinds of weirdness and wonderfulness that I wouldn’t otherwise have experienced growing up in a northern mining town. Some of it may have been inappropriate, but in the main it opened my eyes to the wider world.

In this Engadget article, Violet Blue points out that the demise of the open web means we’ve also lost meaningful free speech:

It's critical... to understand that apps won, and the open internet lost. In 2013, most users accessing the internet went to mobile and stayed that way. People don't actually browse the internet anymore, and we are in a free-speech nightmare.

Because of Steve Jobs, adult and sex apps are super-banned from Apple’s conservative walled garden. This, combined with Google’s censorious push to purge its Play Store of sex has quietly, insidiously formed a censored duopoly controlled by two companies that make Morality in Media very, very happy. Facebook, even though technically a darknet, rounded it out.

A very real problem for society at the moment is that we simultaneously want to encourage free-thinking and diversity and to protect people from distasteful content. I’m not sure what the answer is, but outsourcing the decision to tech companies probably isn’t it.

In 1997, Ann Powers wrote an essay called "In Defense of Nasty Art." It took progressives to task for not defending rap music because it was "obscene" and sexually graphic. Powers puts it mildly when she states, "Their apprehension makes the fight to preserve freedom of expression seem hollow." This is an old problem. So it's no surprise that the same websites forbidding, banning, and blocking "sexually suggestive" art content also claim to care about free speech.
As a parent of a 12-year-old boy and an eight-year-old girl, I check the PEGI age ratings for the games they play. I also trust Common Sense Media to tell me about the content of films they want to watch, and I'm careful about what they can and can't access on the web.

Violet Blue’s article is a short one, so focuses on the tech companies, but the real issue here is one level down. The problem is neoliberalism. As Byung-Chul Han comments in Psychopolitics: Neoliberalism and New Technologies of Power, which I’m reading at the moment:

Neoliberalism represents a highly efficient, indeed an intelligent, system for exploiting freedom. Everything that belongs to practices and expressive forms of liberty – emotion, play and communication – comes to be exploited.
Almost everything is free at the point of access these days, which means, in the oft-repeated phrase, that we are the product. This means that in order to extract maximum value, nobody can be offended. I'm not so sure that I want to live in an inoffensive future.

Source: Engadget (via Noticing)

Dis-trust and blockchain technologies

Serge Ravet is a deep thinker, a great guy, and a tireless advocate of Open Badges. In the first of a series of posts on his Learning Futures blog he explains why, in his opinion, blockchain-based credentials “are the wrong solution to a false problem”.

I wouldn’t phrase things with Serge’s colourful metaphors and language inspired by his native France, but I share many of his sentiments about the utility of blockchain-based technologies. Stephen Downes commented that he didn’t like the tone of the post, with “the metaphors and imagery seem[ing] more appropriate to a junior year fraternity chat room than to a discussion of blockchain and academics”.

It’s not my job as a commentator to be the tone police, but rather to gather up the nuggets and share them with you:

My attention was recently attracted to an article describing blockchains as “distributed trust” which they are not, but makes a nice and closer to the truth acronym: dis-trust…
Blockchains are, in some circumstances, a great replacement for a centralised database. I find it difficult to get excited about that, as does Serge:
It is time for a copernican revolution, moving Blockchains from the centre of all designs to its periphery, as an accessory worth exploiting, or not. If there is a need for a database, the database doesn’t have to be distributed, if there are decisions to be made, they do not have to be left to an inflexible algorithm. On the other hand, if the design requires computer synchronisation, then blockchains might be one of the possible solutions, though not the only one.
One of the difficulties, of course, is that hype perpetuates hype. If you're a vendor and your client (or potential client) asks you a question, you'd better be ready with a positive answer:
In the current strands for European funding, knowing that the European Union has decided to establish a “European blockchain infrastructure” in 2019, who will dare not to mention blockchains in their responses to the calls for tenders? And if you are a business and a client asks “when will you have a blockchain solution” what is the response most likely to get her attention: that’s not relevant to your problem or we have a blockchain solution that just matches your needs? How to resist the blockchain mania while providing clients and investors with something that sounds like what they want to hear?
It's been four years since I first wrote about blockchain and badges. Since then, I co-founded a research project called BadgeChain, reflected on some of Serge's earlier work about a 'bit of trust', confirmed that BlockCerts and badges are friends, commented on why blockchain-based credentials are best used for high-stakes situations, written about blockchain and GDPR, called out blockchain as a futuristic integrity wand, agreed with Adam Greenfield that blockchain technologies are a stepping stone, reflected on the use of blockchain-based credentials in Higher Education, sighed about most examples of blockchain being bullshit, and explained that blockchain is about trust minimisation.

I think you can see where people like Serge and I stand on all this. It’s my considered opinion that blockchain would not have been seen as a ‘sexy’ technology if there wasn’t a huge cryptocurrency bubble attached to it.

I’ve said it before and I’ll say it again: you need to understand a technology before you add it to the ‘essential’ box for any given project. There are high-stakes use cases for blockchain-based credentials, but they’re few and far between.

Source: Learning Futures


Image adapted from one in the Public Domain

Why it's so hard to quit Big Tech

I’m writing this on a Google Pixelbook. Earlier this evening I wiped it, fully intending to install Linux on it, and then… meh. Partly, that’s because the Pixelbook now supports Linux apps in a sandboxed environment (which is great!) but mostly because using ChromeOS on decent hardware is just a lovely user experience.

Writing for TechCrunch, Danny Crichton writes:

Privacy advocates will tell you that the lack of a wide boycott against Google and particularly Facebook is symptomatic of a lack of information: if people really understood what was happening with their data, they would galvanize immediately for other platforms. Indeed, this is the very foundation for the GDPR policy in Europe: users should have a choice about how their data is used, and be fully-informed on its uses in order to make the right decision for them.
This is true for all kinds of things. If people only knew about the real cost of Brexit, about what Donald Trump was really like, about the facts of global warming... and on, and on.

I think it’s interesting to compare climate change and Big Tech. We all know that we should probably change our actions, but the symptoms only affect us directly very occasionally. I’m just pleased that I’ve been able to stay off Facebook for the last nine years…

Alternatives exist for every feature and app offered by these companies, and they are not hard to find. You can use Signal for chatting, DuckDuckGo for search, FastMail for email, 500px or Flickr for photos, and on and on. Far from being shameless clones of their competitors, in many cases these products are even superior to their originals, with better designs and novel features.
It's not good enough just to create a moral choice and talk about privacy. Just look at the Firefox web browser from Mozilla, which now stands at less than 5% market share. That's why I think that we need to be thinking about regulation (like GDPR!) to change things, not expect individual users to make some kind of stand.

I mean, just look at things like this recent article that talks about building your own computer, sideloading APK files onto an Android device with a modified bootloader, and setting up your own ‘cloud’ service. It’s do-able, and I’ve done it in the past, but it’s not fun. And it’s not a sustainable solution for 99% of the population.

Source: TechCrunch

Let's (not) let children get bored again

Is boredom a good thing? Is there a direct link between having nothing to do and being creative? I’m not sure. Pamela Paul, writing in The New York Times, certainly thinks so:

[B]oredom is something to experience rather than hastily swipe away. And not as some kind of cruel Victorian conditioning, recommended because it’s awful and toughens you up. Despite the lesson most adults learned growing up — boredom is for boring people — boredom is useful. It’s good for you.

Paul doesn't give any evidence beyond anecdote for boredom being 'good for you'. She gives a post hoc argument stating that because someone's creative life came after (what they remembered as) a childhood punctuated by boredom, the boredom must have caused the creativity.

I don’t think that’s true at all. You need space to be creative, but that space isn’t physical, it’s mental. You can carve it out in any situation, whether that’s while watching a TV programme or staring out of a window.

For me, the elephant in the room here is the art of parenting. Not a week goes by without the media beating up parents for not doing a good enough job. This is particularly true of the bizarre concept of ‘screentime’ (something that Ian O’Byrne and Kristen Turner are investigating as part of a new project).

In the article, Paul admits that previous generations ‘underparented’. However, she then creates a false dichotomy between that and ‘relentless’ modern helicopter parenting. Where’s the happy medium that most of us inhabit?

Only a few short decades ago, during the lost age of underparenting, grown-ups thought a certain amount of boredom was appropriate. And children came to appreciate their empty agendas. In an interview with GQ magazine, Lin-Manuel Miranda credited his unattended afternoons with fostering inspiration. “Because there is nothing better to spur creativity than a blank page or an empty bedroom,” he said.

Nowadays, subjecting a child to such inactivity is viewed as a dereliction of parental duty. In a much-read story in The Times, “The Relentlessness of Modern Parenting,” Claire Cain Miller cited a recent study that found that regardless of class, income or race, parents believed that “children who were bored after school should be enrolled in extracurricular activities, and that parents who were busy should stop their task and draw with their children if asked.”

So parents who provide for their children by enrolling them in classes and activities to explore and develop their talents are somehow doing them a disservice? I don't get it. Fair enough if they're forcing them into those activities, but I don't know too many parents who are doing that.

Ultimately, Paul and I have very different expectations and experiences of adult life. I don’t expect to be bored, whether at work or out of it. There’s so much to do in the world, online and offline, that I don’t particularly get the fetishisation of boredom. To me, as soon as someone uses the word ‘realistic’, they’ve lost the argument:

But surely teaching children to endure boredom rather than ratcheting up the entertainment will prepare them for a more realistic future, one that doesn’t raise false expectations of what work or life itself actually entails. One day, even in a job they otherwise love, our kids may have to spend an entire day answering Friday’s leftover email. They may have to check spreadsheets. Or assist robots at a vast internet-ready warehouse.

This sounds boring, you might conclude. It sounds like work, and it sounds like life. Perhaps we should get used to it again, and use it to our benefit. Perhaps in an incessant, up-the-ante world, we could do with a little less excitement.

No, perhaps we should make work more engaging, and provide more than bullshit jobs. Perhaps we should seek out interesting things ourselves, so that our children do likewise?

Source: The New York Times

The robot economy and social-emotional skills

Ben Williamson writes:

The steady shift of the knowledge economy into a robot economy, characterized by machine learning, artificial intelligence, automation and data analytics, is now bringing about changes in the ways that many influential organizations conceptualize education moving towards the 2020s. Although this is not an epochal or decisive shift in economic conditions, but rather a slow metamorphosis involving machine intelligence in the production of capital, it is bringing about fresh concerns with rethinking the purposes and aims of education as global competition is increasingly linked to robot capital rather than human capital alone.
A plethora of reports and pronouncements by 'thought-leaders' and think tanks warn us about a medium-term future where jobs are 'under threat'. This has a concomitant impact on education:
The first is that education needs to de-emphasize rote skills of the kind that are easy for computers to replace and stress instead more digital upskilling, coding and computer science. The second is that humans must be educated to do things that computerization cannot replace, particularly by upgrading their ‘social-emotional skills’.
A few years ago, I remember asking someone who ran different types of coding bootcamps which would be the best approach for me. Somewhat conspiratorially, he told me that I didn't need to learn to code, I just needed to learn how to manage those who do the coding. As robots and AI become more sophisticated and can write their own programs, I suspect this 'management' will include non-human actors.

Of all of the things I’ve had to learn for and during my (so-called) career, the hardest has been gaining the social-emotional skills to work remotely. This isn’t an easy thing to unpack, especially when we’re all encouraged to have a ‘mission’ in life and to be emotionally invested in our work.

Williamson notes:

The OECD’s Andreas Schleicher is especially explicit about the perceived strategic importance of cultivating social-emotional skills to work with artificial intelligence, writing that ‘the kinds of things that are easy to teach have become easy to digitise and automate. The future is about pairing the artificial intelligence of computers with the cognitive, social and emotional skills, and values of human beings’.

Moreover, he casts this in clearly economic terms, noting that ‘humans are in danger of losing their economic value, as biological and computer engineering make many forms of human activity redundant and decouple intelligence from consciousness’. As such, human emotional intelligence is seen as complementary to computerized artificial intelligence, as both possess complementary economic value. Indeed, by pairing human and machine intelligence, economic potential would be maximized.

[…]

The keywords of the knowledge economy have been replaced by the keywords of the robot economy. Even if robotization does not pose an immediate threat to the future jobs and labour market prospects of students today, education systems are being pressured to change in anticipation of this economic transformation.

I’m less bothered about Schleicher’s link between social-emotional skills and the robot economy. I reckon that, no matter what time period you live in, there are knowledge and skills you need to be successful when interacting with other human beings.

That being said, there are ways of interacting with machines that are important to learn to get ahead. I stand by what I said in 2013 about the importance of including computational thinking in school curricula. To me, education is about producing healthy, engaged citizens. They need to understand the world around them, be (digitally) confident in it, and have the conceptual tools to be able to problem-solve.

Source: Code Acts in Education

At the end of the day, everything in life is a 'group project'

Everything is a group project

I like to surround myself with doers, people who are happy, like me, to roll their sleeves up and get stuff done. Unfortunately, there’s plenty of people in life who seem to busy themselves with putting up roadblocks and finding ways why their participation isn’t possible.

Source: Indexed

Make art, tell a story

As detailed here, our co-op decided last week to lift our sights, expand our vision, and represent ourselves more holistically.

So when I stumbled upon Paul Jarvis' post on the importance of making art, it really chimed with me:

What makes the content you create awesome is that it’s a story told through your unique lens. It’s you, telling a story. It’s you not giving a fuck about anything but telling that story. It doesn’t matter if it’s a blog post about banking software or a video on how to make nut milk, the content will be better if you let your real personality shine.
He gives some specific tips in the short post, which is definitely worth your time.

From my point of view with Thought Shrapnel, I don’t track open rates, etc. because it means I can focus on what I’m interested in, rather than whatever I can get people to click on.

Source: Paul Jarvis

Fun smartphone-based party games

At our co-op meetup last week, once we’d got business out of the way for the day, we decided to play some games. Bryan’s got a projector in his living room which he can hook up to his laptop, and he invited us all to create a Kahoot! quiz. We then played each other’s quizzes, which was fun.

Back at home, I’d already introduced my two children to AirConsole, which they use to play games using their tablets as controllers. I searched for games we could play on the big screen without having to download anything and the first one we played was called Multeor. This involves each player controlling a ‘meteor’ which destroys things to collect points.

Multeor

A list I found on Reddit was also useful, although some of them are games that have to be purchased via the Steam marketplace. We played Spaceteam which, appropriately enough for our meetup, describes itself as “a cooperative shouting game for phones and tablets”. It didn’t require the projector, and was great fun. I even played it with my wife when I got home!

While I’m on the subject of games, Laura introduced me to Paddle Force, which our former Mozilla colleagues Bobby Richter and Luke Pacholski created. It’s like Pong on steroids, and my children love it! Luke’s also created Pixel Drift, which reminds me a lot of playing Super Off Road at the arcades as a kid!

Cal Newport on the dangers of 'techno-maximalism'

I have to say that I was not expecting to enjoy Cal Newport’s book Deep Work when I read it a couple of years ago. As someone who’s always been fascinated by technology, and who has spent most of his career working in and around it, I assumed it was going to take the approach of a Luddite working in his academic ivory tower.

It turns out I was completely wrong in this assumption, and the book was one of the best I read in 2017. Newport is back with a new book that I’ve eagerly pre-ordered called Digital Minimalism: On Living Better with Less Technology. It comes out next week. Again, the title is something that would usually be off-putting to me, but it’s hard to argue with the points he has made in his blog posts since Deep Work.

As you would expect with a new book coming out, Newport is doing the rounds of interviews. In one with GQ magazine, he talks about the dangers of ‘digital maximalism’, which he defines in the following way:

The basic idea is that technological innovations can bring value and convenience into your life. So, you assess new technological tools with respect to what value or convenience it can bring into your life. And if you can find one, then the conclusion is, "If I can afford it, I should probably have this." It just looks at the positives. And its view is "more is better than less," because more things that bring you benefits means more total benefits. This is what maximalism is: "If there's something that brings value, you should get it."
That type of thinking is dangerous, as:
We see these tools, and we have this narrative that, "You can do this on Facebook," or "This new feature on this device means you can do this, which would be convenient." What you don't factor in is, "Okay, well what's the cost in terms of my time attention required to have this device in my life?" Facebook might have some particular thing that's valuable, but then you have the average U.S. user spending something like 50 minutes a day on Facebook products. That's actually a pretty big [amount of life] that you're now trading in order to get whatever the potential small benefit is.

[Maximalism] ignores the opportunity cost. And as Thoreau pointed out hundreds of years ago, it’s actually in the opportunity cost that all the interesting math happens.

Newport calls for a new philosophy of technology which includes things like ‘digital minimalism’ (the subject of his new book):

Digital minimalism is a clear philosophy: you figure out what's valuable to you. For each of these things you say, "What's the best way I need to use technology to support that value?" And then you happily miss out on everything else. It's about additively building up a digital life from scratch to be very specifically, intentionally designed to make your life much better.

There might be other philosophies, just like in health and fitness. More important to me than everyone becoming a digital minimalist, is people in general getting used to this idea that, “I have a philosophy that’s really clear and grounded in my values that tells me how I approach technology.” Moving past this ad-hoc stage of like, “Whatever, I just kind of signed up for maximalist stage,” and into something a little bit more intentional.

I’ve never really been the type of person to go to a book club, but what with this coming out and Company of One by Paul Jarvis arriving yesterday, perhaps I need to set up a virtual one?

Source: GQ

Staying for nothing and shrinking from nothing (quote)

“If you do the task before you always adhering to strict reason with zeal and energy and yet with humanity, disregarding all lesser ends and keeping the divinity within you pure and upright, as though you were even now faced with its recall — if you hold steadily to this, staying for nothing and shrinking from nothing, only seeking in each passing action a conformity with nature and in each word and utterance a fearless truthfulness, then shall the good life be yours. And from this course no man has the power to hold you back.”

(Marcus Aurelius)

Through the looking-glass

Earlier this month, George Dyson, historian of technology and author of books including Darwin Among the Machines, published an article at Edge.org.

In it, he cites Childhood’s End, a story by Arthur C. Clarke in which benevolent overlords arrive on earth. “It does not end well”, he says. There’s lots of scaremongering in the world at the moment and, indeed, some people have said for a few years now that software is eating the world.

Dyson comments:

The genius — sometimes deliberate, sometimes accidental — of the enterprises now on such a steep ascent is that they have found their way through the looking-glass and emerged as something else. Their models are no longer models. The search engine is no longer a model of human knowledge, it is human knowledge. What began as a mapping of human meaning now defines human meaning, and has begun to control, rather than simply catalog or index, human thought. No one is at the controls. If enough drivers subscribe to a real-time map, traffic is controlled, with no central model except the traffic itself. The successful social network is no longer a model of the social graph, it is the social graph. This is why it is a winner-take-all game. Governments, with an allegiance to antiquated models and control systems, are being left behind.

I think that’s an insightful point: human knowledge is seen to be that indexed by Google, friendships are mediated by Facebook, Twitter and Instagram, and to some extent what is possible/desirable/interesting is dictated to us rather than originating from us.

We imagine that individuals, or individual algorithms, are still behind the curtain somewhere, in control. We are fooling ourselves. The new gatekeepers, by controlling the flow of information, rule a growing sector of the world.

What deserves our full attention is not the success of a few companies that have harnessed the powers of hybrid analog/digital computing, but what is happening as these powers escape into the wild and consume the rest of the world.

Indeed. We need to raise our sights a little here and start asking governments to use their dwindling powers to break up mega corporations before Google, Amazon, Microsoft and Facebook are too powerful to stop. However, given how enmeshed they are in everyday life, I’m not sure at this point it’s reasonable to ask the general population to stop using their products and services.

Source: Edge.org

Surfacing popular Google Sheets to create simple web apps

I was struck by the huge potential impact of this idea from Marcel van Remmerden:

Here is a simple but efficient way to spot Enterprise Software ideas — just look at what Excel sheets are being circulated over emails inside any organization. Every single Excel sheet is a billion-dollar enterprise software business waiting to happen.
I searched "google sheet" education and "google sheet" learning on Twitter just now and, within about 30 seconds found:

Google Sheet example 1

…and:

Google Sheet example 2

…and:

Google Sheet example 3

These are all examples of things that could (and perhaps should) be simple web apps. In the article, van Remmerden explains how he created a website based on someone else’s Google Sheet (with full attribution) and started generating revenue.

It’s a little-known fact outside the world of developers that Google Sheets can serve as a very simple database for web applications. So if you’ve got an awkward web-based spreadsheet that’s being used by lots of people in your organisation, maybe it’s time to productise it?
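To make that concrete, here’s a minimal sketch of the idea. A Google Sheet that has been shared for anyone-with-the-link viewing can be exported as CSV via the standard export endpoint and read like a tiny read-only database. The sheet ID, column names and helper functions below are my own hypothetical examples, not anything from van Remmerden’s article.

```python
import csv
import io
import urllib.request


def sheet_csv_url(sheet_id, gid=0):
    """Build the CSV export URL for a publicly viewable Google Sheet."""
    return (f"https://docs.google.com/spreadsheets/d/{sheet_id}"
            f"/export?format=csv&gid={gid}")


def rows_from_csv(text):
    """Parse exported CSV text into a list of dicts keyed by the header row."""
    return list(csv.DictReader(io.StringIO(text)))


def fetch_rows(sheet_id, gid=0):
    """Fetch a sheet and return its rows as dicts (requires network access)."""
    with urllib.request.urlopen(sheet_csv_url(sheet_id, gid)) as resp:
        return rows_from_csv(resp.read().decode("utf-8"))


# Offline demonstration with sample data shaped like a sheet export:
sample = "name,course,progress\nAda,Maths,80\nGrace,Computing,95\n"
print(rows_from_csv(sample))
```

A real app would call `fetch_rows("<your sheet id>")` and render the result; for anything beyond toy scale you’d want the official Sheets API and some caching, but this is roughly why a circulating spreadsheet is a web app waiting to happen.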

Source: Marcel van Remmerden

Federico Leggio's type animations

These type animations by Federico Leggio, a freelance graphic designer based in Sicily, are incredible:

To infinity and Twist New horizons

Source: Federico Leggio (via Dense Discovery)

Volume of work

This definitely speaks to me:

Quantity has a quality all its own as Lenin said. The sheer volume of your work is what works as a signal of weirdness, because anyone can do a one-off weird thing, but only volume can signal a consistently weird production sensibility that will inspire people betting on you. The energy evident in a body of work is the most honest signal about it that makes people trust you to do things for them.
Source: Venkatesh Rao (via Tom Critchlow)

What did the web used to be like?

One of the things it’s easy to forget when you’ve been online for the last 20-plus years is that not everyone is in the same boat. Not only are there adults who never experienced the last millennium, but varying internet adoption rates mean that, for some people, centralised services like YouTube, Facebook and Twitter are synonymous with the web.

Stories are important. That’s why I appreciated this Hacker News thread with the perfect title and sub-title:

Ask HN: What was the Internet like before corporations got their hands on it? What was the Internet like in its purest form? Was it mainly information sharing, and if so, how reliable was the information?
There's lots to unpack here: corporate takeover of online spaces, veracity of information provided, and what the 'purest form' of the internet actually is/was.

Inevitably, given the readership of Hacker News, the top-voted post is technical (and slightly boastful):

1990. Not very many people had even heard of it. Some of us who'd gotten tired of wardialing and Telenet/Tymnet might have had friends in local universities who clued us in with our first hacked accounts, usually accessed by first dialing into university DECServers or X.25 networks. Overseas links from NSFNet could be as slow as 128kbit and you were encouraged to curtail your anonymous FTP use accordingly. Yes you could chat and play MUDs, but you could also hack so many different things. And admins were often relatively cool as long as you didn't use their machines as staging points to hack more things. If you got your hands on an outdial modem or x.25 gateway, you were sitting pretty sweet (until someone examined the bill and kicked you out). It really helped to be conversant in not just Unix, but also VMS, IBM VM/CMS, and maybe even Primenet. When Phrack came out, you immediately read it and removed it from your mail spool, not just because it was enormous, but because admins would see it and label you a troublemaker.

We knew what the future was, but it was largely a secret. We learned Unix from library books and honed skills on hacked accounts, without any ethical issue because we honestly felt we were preparing ourselves and others for a future where this kind of thing should be available to everyone.

We just didn’t foresee it being wirelessly available at McDonalds, for free. That part still surprises me.

I’ve already detailed my early computing history (up to 2009) for a project that asked for my input. I’ll not rehash it here, but the summary is that I got my first PC when I was 15 for Christmas 1995, and (because my parents wouldn’t let me) secretly started going online soon after.

My memory of this from an information-sharing point of view was that you had to be very careful about what you read. Because the web was smaller, and it was only the people who were really interested in getting their stuff out there who had websites, there were a lot of crazy conspiracy theories. I’m kind of glad that I went on as a reasonably-mature teenager rather than a tween.

Although I’m very happy to be able to make my living primarily online, I suppose I feel a bit like this commenter:

This will probably come across as Get Of My Lawn type of comment. What I remember most about internet pre Facebook in particular and maybe Pre-smart phones. It was mostly a place for geeks. Geeks wrote blogs or had personal websites. Non geek stuff was more limited. It felt like a place where the geeks that were semi socially outcast kind of ran the place.

Today the internet feels like the real world where the popular people in the real world are the most popular people online. Where all the things that I felt like I escaped from on the net before I can no longer avoid.

I’m not saying that’s bad. I think it’s awesome that my non tech friends and family can connect and or share their lives and thoughts easily where as before there was a barrier to entry. I’m only pointing out that, at least for me, it changed. It was a place I liked or felt connected to or something, maybe like I was “in the know” or I can’t put my finger on it. To now where I have no such feelings.

Maybe it’s the same feeling as liking something before it’s popular and it loses that feeling of specialness once everyone else is into it. (which is probably a bad feeling to begin with)

Another commenter pointed to a short blog post he wrote on the subject, where he talks about how things were better when everyone was anonymous:

When it was anonymous, your name wasn’t attached to everything you did online. Everyone went by a handle. This means you could start a Geocities site and carve out your own niche space online, people could befriend and follow you who normally wouldn’t, and even the strangest of us found a home. All sorts of whacky, impossible things were possible because we weren’t bound by societal norms that plague our daily existence.
I get that, but I think that things that make sense and are sustainable for the few, aren't necessarily so for the many. There's nothing wrong with nostalgia and telling stories about how things used to be, but as someone who used to teach the American West, there is (for better or worse) a parallel there with the evolution of the web.

The closest place to how the web was that I currently experience is Mastodon. It’s full of geeks, marginalised groups, and weird/wacky ideas. You’d love it.

Source: Hacker News


Old web screenshot compilation image via Vice

Hong Kong shutter art

Having never visited Barcelona before November 2017, I went there five times in the subsequent 12 months. One of the things that struck me was the art in the city; some municipal, some architectural, and some more vernacular (i.e. graffiti-based).

When I was in Denver a few months ago, Noah Geisel was kind enough to give me a walking tour of some of the (partly commissioned) street art there. It was incredible.

I’ve never been to Hong Kong, and am unlikely to go there any time soon, but this Twitter thread of Hong Kong shutter art makes me want to!

Source: Hong Kong Hermit

True test of intelligence (quote)

"The true test of intelligence is not how much we know how to do, but how to behave when we don’t know what to do."

(John Holt)

Hierarchies and large organisations

This 2008 post by Paul Graham, re-shared on Hacker News last week, struck a chord:

What's so unnatural about working for a big company? The root of the problem is that humans weren't meant to work in such large groups.

Another thing you notice when you see animals in the wild is that each species thrives in groups of a certain size. A herd of impalas might have 100 adults; baboons maybe 20; lions rarely 10. Humans also seem designed to work in groups, and what I’ve read about hunter-gatherers accords with research on organizations and my own experience to suggest roughly what the ideal size is: groups of 8 work well; by 20 they’re getting hard to manage; and a group of 50 is really unwieldy.

I really enjoyed working at the Mozilla Foundation when it was around 25 people. By the time it got to 60? Not so much. It’s potentially different with every organisation, though, and how teams are set up.

Graham goes on to talk about how, in large organisations, people are split into teams and put into a hierarchy. That means that groups of people are represented at a higher level by their boss:

A group of 10 people within a large organization is a kind of fake tribe. The number of people you interact with is about right. But something is missing: individual initiative. Tribes of hunter-gatherers have much more freedom. The leaders have a little more power than other members of the tribe, but they don't generally tell them what to do and when the way a boss can.

[…]

[W]orking in a group of 10 people within a large organization feels both right and wrong at the same time. On the surface it feels like the kind of group you’re meant to work in, but something major is missing. A job at a big company is like high fructose corn syrup: it has some of the qualities of things you’re meant to like, but is disastrously lacking in others.

These words may come back to haunt me, but I have no desire to work in a huge organisation. I’ve seen what it does to people — and Graham seems to agree:

The people who come to us from big companies often seem kind of conservative. It's hard to say how much is because big companies made them that way, and how much is the natural conservatism that made them work for the big companies in the first place. But certainly a large part of it is learned. I know because I've seen it burn off.
Perhaps there's a happy medium? A four-day workweek gives scope to either work on a 'side hustle', volunteer, or do something that makes you happier. Maybe that's the way forward.

Source: Paul Graham

Exit option democracy

This week saw the launch of a new book by Shoshana Zuboff entitled The Age of Surveillance Capitalism: the fight for a human future at the new frontier of power. It was featured in two of my favourite newspapers, The Observer and the The New York Times, and is the kind of book I would have lapped up this time last year.

In 2019, though, I’m being a bit more pragmatic, taking heed of Stoic advice to focus on the things that you can change. Chiefly, that’s your own perceptions about the world. I can’t change the fact that, despite the Snowden revelations and everything that has come afterwards, most people don’t care one bit that they’re trading privacy for convenience.

That puts those who care about privacy in a bit of a predicament. You can use the most privacy-respecting email service in the world, but as soon as you communicate with someone using Gmail, then Google has got the entire conversation. Chances are, the organisation you work for has ‘gone Google’ too.

Then there’s Facebook shadow profiles. You don’t even have to have an account on that platform for the company behind it to know all about you. Same goes with companies knowing who’s in your friendship group if your friends upload their contacts to WhatsApp. It makes no difference if you use ridiculous third-party gadgets or not.

In short, if you want to live in modern society, your privacy depends on your family and friends. Of course you have the option to choose not to participate in certain platforms (I don’t use Facebook products) but that comes at a significant cost. It’s the digital equivalent of Thoreau taking himself off to Walden Pond.

In a post from last month that I stumbled across this weekend, Nate Matias reflects on a talk he attended by Janet Vertesi at Princeton University’s Center for Information Technology Policy. Vertesi, says Matias, tried four different ways of opting out of technology companies gathering data on her:

  • Platform avoidance
  • Infrastructural avoidance
  • Hardware experiments
  • Digital homesteading
Interestingly, the starting point is Vertesi's rejection of 'exit option democracy':
The basic assumption of markets is that people have choices. This idea that “you can just vote with your feet” is called an “exit option democracy” in organizational sociology (Weeks, 2004). Opt-out democracy is not really much of a democracy, says Janet. She should know–she’s been opting out of tech products for years.
The option Vertesi advocates for going Google-free is a pain in the backside. I know, because I've tried it:
To prevent Google from accessing her data, Janet practices “data balkanization,” spreading her traces across multiple systems. She’s used DuckDuckGo, sandstorm.io, ResilioSync, and youtube-dl to access key services. She’s used other services occasionally and non-exclusively, and varied it with open source alternatives like etherpad and open street map. It’s also important to pay attention to who is talking to whom and sharing data with whom. Data balkanization relies on knowing what companies hate each other and who’s about to get in bed with whom.
The time I've spent doing these things was time I was not being productive, nor was it time I was spending with my wife and kids. It's easy to roll your eyes at people "trading privacy for convenience" but it all adds up.

Talking of family, straying too far from societal norms has, for better or worse, negative consequences. Just as Linux users were targeted for surveillance, so Vertesi and her husband were suspected of fraud for browsing the web using Tor and using cash for transactions:

Trying to de-link your identity from data storage has consequences. For example, when Janet and her husband tried to use cash for their purchases, they faced risks of being reported to the authorities for fraud, even though their actions were legal.
And then, of course, there's the tinfoil hat options:
...Janet used parts from electronics kits to make her own 2g phone. After making the phone Janet quickly realized even a privacy-protecting phone can’t connect to the network without identifying the user to companies through the network itself.
I'm rolling my eyes at this point. The farthest I've gone down this route is to use the now-defunct Firefox OS and LineageOS for microG. Although both had their upsides, they were too annoying to use for extended periods of time.

Finally, Vertesi goes down the route of trying to own all your own data. I’ll just point out that there’s a reason those of us who had huge CD and MP3 collections switched to Spotify. Looking after any collection takes time and effort. It’s also a lot more cost effective for someone like me to ‘rent’ my music instead of own it. The same goes for Netflix.

What I do accept, though, is that Vertesi’s findings show that ‘exit option democracy’ isn’t really an option here, so the world of technology isn’t really democratic. My takeaway from all this, and the reason for my pragmatic approach this year, is that it’s up to governments to do something about all this.

Western society teaches us that empowered individuals can change the world. But if you take a closer look, whether it’s surveillance capitalism or climate change, it’s legislation that’s going to make the biggest difference here. Just look at the shift that took place because of GDPR.

So whether or not I read Zuboff’s new book, I’m going to continue my pragmatic approach this year. Meanwhile, I’ll continue to mute the microphone on the smart speakers in our house when they’re not being used, block trackers on my Android smartphone, and continue my monthly donations to the work of the Electronic Frontier Foundation and the Open Rights Group.

Source: J. Nathan Matias

Drink Talk Learn

I’ve been to many a TeachMeet, some where alcohol has been involved. But this sounds even more fun:

Drink Talk Learn rules

Source: BuzzFeed (via Ian Usher)


Implicit leverage

Tyler Cowen at Marginal Revolution asks how well we understand the organisations we work with and for:

Most (not all) organizations have forms of leverage which are built in and which do not show up as debt on the balance sheet.  Banks may have off-balance sheet risk through derivatives, companies may sell off their valuable assets, and NBA teams may tank their ability to keep draft picks and free agents in their future.
In other words, every organisation has people, other organisations, or resources on which it is dependent. That can look like event organisers not alienating a sponsor, universities maintaining their brand overseas so they can continue to recruit lucrative overseas students, and organisations doing well because of a handful of individuals who win investors' trust.

When it comes to politics, of course, ‘leverage’ is almost always something problematic. In fact, we usually use the phrase ‘in the pocket of’ instead to show our opprobrium when a politician has close financial ties to, say, a tobacco company or big business.

In other words, understanding how leverage works in everyday life, business, and politics is probably something we should be teaching in schools.

Source: Marginal Revolution


Image by Mike Cohen used under a Creative Commons License

Blockchain is about trust minimisation

I’ve always laughed when people talk about ‘trust’ and blockchain. Sometimes I honestly question whether blockchain boosters live in the same world as I do; the ‘trust’ they keep on talking about is a feature of life as it currently is, not in a crypto-utopia.

Albert Wenger takes this up in an excellent recent post:

One way to tell that trust was involved in a relationship is when we discover that the person (or company, or technology) acted in a way that harmed us and benefited them. At that point we feel betrayed. This provides a useful distinction between the concepts of trust and reliance. We rely on a clock to tell time. When the clock breaks we will feel disappointed. But when we buy a clock from someone who tells us it is a working clock, we trust them and when it doesn’t work, we feel betrayed (thanks to philosopher Annette Baier for this distinction).
As I keep saying, blockchain is a really boring technology. It's super-useful for backend systems, but that's pretty much it. All of the glamour and excitement has come from speculators trying to inflate a bubble, as has happened many times before.
Now some people have been saying that crypto is exciting because it has “trust built in.” I, however, prefer a different formulation, which is that crypto systems are “trust minimized.”
Exactly. What blockchain is useful for is when you have reason to mistrust the person you're dealing with. Instead of a complex network of trust based on blood ties, friendships, and alliances, we can now perform operations and transactions in a 'trust minimised' way.
We live in a world where large corporations (especially ones with scale or network effects) have often abused trust due to a misalignment of incentives driven by short-term oriented capital markets. There are different ways of tackling this problem, including new regulation, innovative forms of ownership and trust minimized crypto systems.
So let's see blockchain for what it is: a breakthrough for international trading and compliance checking. I'm happy it exists but still, several years later, find it difficult to get too excited about it. And I'll bet you all of your now-worthless Bitcoin that governments around the world will ensure that crypto-utopias turn into crypto-dystopias.
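To make ‘trust minimised’ concrete, here’s a toy sketch of the core idea (my own illustration, not any real blockchain implementation): each record commits to the hash of the record before it, so anyone can verify the whole history for themselves rather than trusting whoever handed it to them.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first link in the chain


def block_hash(prev_hash, payload):
    """Hash a record together with the hash of the record before it."""
    data = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(data.encode("utf-8")).hexdigest()


def build_chain(payloads):
    """Link records into a tamper-evident chain."""
    chain, prev = [], GENESIS
    for payload in payloads:
        h = block_hash(prev, payload)
        chain.append({"prev": prev, "payload": payload, "hash": h})
        prev = h
    return chain


def verify(chain):
    """Anyone can recompute every hash: no trusted third party required."""
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["payload"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True


chain = build_chain([{"from": "A", "to": "B", "amount": 5},
                     {"from": "B", "to": "C", "amount": 2}])
assert verify(chain)
chain[0]["payload"]["amount"] = 500  # tampering with history...
assert not verify(chain)             # ...is immediately detectable
```

That detectability is the whole trick: the boring backend usefulness comes from replacing “trust me” with “check for yourself”.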

Source: Continuations

Forging better habits

I’m very much looking forward to reading James Clear’s new book Atomic Habits. On his (very popular) blog, Clear shares a chapter in which he talks about the importance of using a ‘habit tracker’.

In that chapter, he states:

Habit formation is a long race. It often takes time for the desired results to appear. And while you are waiting for the long-term rewards of your efforts to accumulate, you need a reason to stick with it in the short-term. You need some immediate feedback that shows you are on the right path.
At the start of the year I started re-using a very simple app called Loop Habit Tracker. It's Android-only and available via F-Droid and Google Play, and I'm sure there's similar apps for iOS.

You can see a screenshot of what I’m tracking at the top of this post. You simply enter what you want to track, how often you want to do it, and tick off when you’ve achieved it. Not only can the app prompt you, should you wish, but you can also check out your ‘streak’.
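For the curious, a streak like the one the app displays can be computed from a set of completion dates. This is a minimal sketch of the idea, not Loop Habit Tracker’s actual implementation; the function name and the rule that an unfinished “today” doesn’t yet break the streak are my own assumptions.

```python
from datetime import date, timedelta


def current_streak(completions, today):
    """Count consecutive completed days ending today (or yesterday,
    so a not-yet-ticked current day doesn't break the streak)."""
    day = today if today in completions else today - timedelta(days=1)
    streak = 0
    while day in completions:
        streak += 1
        day -= timedelta(days=1)
    return streak


done = {date(2019, 1, 14), date(2019, 1, 15), date(2019, 1, 16)}
print(current_streak(done, date(2019, 1, 16)))  # 3
print(current_streak(done, date(2019, 1, 17)))  # still 3: today not ticked yet
print(current_streak(done, date(2019, 1, 19)))  # 0: the streak has lapsed
```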

Clear lists three ways that a habit tracker can help:

  1. It reminds you to act
  2. It motivates you to continue
  3. It provides immediate satisfaction
I find using a habit tracker a particularly effective way of upping my game. I'm realistic: I've given myself a day off every week on top of two sessions each of running, swimming, and going to the gym.

If you’re struggling to make a new habit ‘stick’, I agree with Clear that doing something like this for six weeks is a particularly effective way to kickstart your new regime!

Source: James Clear

A reminder of how little we understand the world

"The important thing in science is not so much to obtain new facts as to discover new ways of thinking about them." (William Lawrence Bragg)

Science is usually pointed to as a paradigm of cold, hard reason. But, as anyone who's ever studied the philosophy of science will attest, scientific theories — just like all human theories — are theory-laden.

This humorous xkcd cartoon is a great reminder of that.

Source: xkcd

The quixotic fools of imperialism

As an historian with an understanding of our country’s influence on the world over the last few hundred years, I look back at the British Empire with a sense of shame, not of pride.

But, even if you do flag-wave and talk about our nation’s glorious past, an article in yesterday’s New York Times shows how far we’ve fallen:

The Brexiteers, pursuing a fantasy of imperial-era strength and self-sufficiency, have repeatedly revealed their hubris, mulishness and ineptitude over the past two years. Though originally a “Remainer,” Prime Minister Theresa May has matched their arrogant obduracy, imposing a patently unworkable timetable of two years on Brexit and laying down red lines that undermined negotiations with Brussels and doomed her deal to resoundingly bipartisan rejection this week in Parliament.

I think I'd forgotten how useful the word mendacious is in this context ("lying, untruthful"):

From David Cameron, who recklessly gambled his country’s future on a referendum in order to isolate some whingers in his Conservative party, to the opportunistic Boris Johnson, who jumped on the Brexit bandwagon to secure the prime ministerial chair once warmed by his role model Winston Churchill, and the top-hatted, theatrically retro Jacob Rees-Mogg, whose fund management company has set up an office within the European Union even as he vehemently scorns it, the British political class has offered to the world an astounding spectacle of mendacious, intellectually limited hustlers.

When leaving countries after their imperialist adventures, members of the British ruling elite were fond of dividing countries with arbitrary lines. Cases in point: India, Ireland, the Middle East. That this doesn't work is blatantly obvious, and is a lazy way to deal with complex issues.

It is a measure of English Brexiteers’ political acumen that they were initially oblivious to the volatile Irish question and contemptuous of the Scottish one. Ireland was cynically partitioned to ensure that Protestant settlers outnumber native Catholics in one part of the country. The division provoked decades of violence and consumed thousands of lives. It was partly healed in 1998, when a peace agreement removed the need for security checks along the British-imposed partition line.

I'd love to think that we're nearing the end of what the Times calls 'chumocracy' and no longer have to suffer what Hannah Arendt called "the quixotic fools of imperialism". We can but hope.

Noise cancelling for cars is a no-brainer

We’re all familiar with noise cancelling headphones. I’ve got some that I use for transatlantic trips, and they’re great for minimising any repeating background noise.

Twenty years ago, when I was studying A-Level Physics, I was also building a new PC. I realised that, if I placed a microphone inside the computer case, and fed that into the audio input on the soundcard, I could use software to invert the sound wave and thus virtually eliminate fan noise. It worked a treat.

It doesn’t surprise me, therefore, to find that Bose, best known for its headphones, is offering car manufacturers something similar with “road noise control”:

[youtube https://www.youtube.com/watch?v=SIzkgLdzd9g&w=560&h=315]

With accelerometers, multiple microphones, and algorithms, it’s much more complicated than what I rigged up in my bedroom as a teenager. But the principle remains the same.
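The underlying principle is destructive interference: sample the noise, invert its phase, and play the inverted signal back so the two waveforms sum to (near) zero. A minimal sketch in Python, using a pure tone as a stand-in for steady fan hum (real systems work on messy, measured signals, not an idealised sine wave):

```python
import math

SAMPLE_RATE = 44100  # samples per second
FREQ = 120           # Hz, a stand-in for repetitive fan noise

# One second of a pure tone representing the unwanted noise
noise = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
         for n in range(SAMPLE_RATE)]

# The "anti-noise": the same signal with its phase inverted
anti_noise = [-sample for sample in noise]

# Played together, the two waveforms cancel
residual = [a + b for a, b in zip(noise, anti_noise)]

print(max(abs(sample) for sample in residual))  # prints 0.0
```

In practice the hard part is latency: the inverted signal has to arrive in phase with the noise, which is why steady, repetitive sounds (fans, engine drone, road rumble) cancel well and sudden sounds don't.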

Source: The Next Web

Going your own way (quote)

“To go wrong in one’s own way is better than to go right in someone else’s.”

(Fyodor Dostoevsky)

Location data in old tweets

What use are old tweets? Do you look back through them? If not, then they’re only useful to others, who are able to data mine you using a new tool:

The tool, called LPAuditor (short for Location Privacy Auditor), exploits what the researchers call an "invasive policy" Twitter deployed after it introduced the ability to tag tweets with a location in 2009. For years, users who chose to geotag tweets with any location, even something as geographically broad as “New York City,” also automatically gave their precise GPS coordinates. Users wouldn’t see the coordinates displayed on Twitter. Nor would their followers. But the GPS information would still be included in the tweet’s metadata and accessible through Twitter’s API.

I deleted around 77,500 tweets in 2017 for exactly this kind of reason.
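An LPAuditor-style check is straightforward to sketch. The dict below is a hypothetical example modelled on the shape of the Twitter v1.1 API tweet object described in the quote, where a GeoJSON `coordinates` field (longitude first) rides along in the metadata; the values are made up:

```python
def extract_gps(tweet):
    """Return (lat, lon) if precise coordinates lurk in the tweet's metadata, else None."""
    geo = tweet.get("coordinates")
    if geo and geo.get("type") == "Point":
        lon, lat = geo["coordinates"]  # GeoJSON order: longitude first
        return (lat, lon)
    return None

# Hypothetical tweet: the user tagged a broad place name...
tweet = {
    "text": "Lovely day in town",
    "place": {"full_name": "New York City"},
    # ...but precise GPS coordinates were recorded in the metadata anyway
    "coordinates": {"type": "Point", "coordinates": [-73.9857, 40.7484]},
}

print(extract_gps(tweet))  # (40.7484, -73.9857)
```

Run over years of someone's timeline, coordinates like these cluster around homes and workplaces, which is exactly what made the researchers' audit so invasive.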

Source: WIRED

Remembering the past through photos

A few weeks ago, I bought a Google Assistant-powered smart display and put it in our kitchen in place of the DAB radio. It has the added bonus of cycling through all of my Google Photos, which stretch back as far as when my wife and I were married, 15 years ago.

This part of its functionality makes it, of course, just a cloud-powered digital photo frame. But I think it’s possible to underestimate the power that these things have. About an hour before composing this post, for example, my wife took a photo of a photo(!) that appeared on the display showing me on the beach with our two children when they were very small.

An article by Giuliana Mazzoni in The Conversation points out that our ability to whip out a smartphone at any given moment and take a photo changes our relationship to the past:

We use smart phones and new technologies as memory repositories. This is nothing new – humans have always used external devices as an aid when acquiring knowledge and remembering.

[…]

Nowadays we tend to commit very little to memory – we entrust a huge amount to the cloud. Not only is it almost unheard of to recite poems, even the most personal events are generally recorded on our cellphones. Rather than remembering what we ate at someone’s wedding, we scroll back to look at all the images we took of the food.

Mazzoni points out that this can be problematic, as memory is important for learning. However, there may be a “silver lining”:

Even if some studies claim that all this makes us more stupid, what happens is actually shifting skills from purely being able to remember to being able to manage the way we remember more efficiently. This is called metacognition, and it is an overarching skill that is also essential for students – for example when planning what and how to study. There is also substantial and reliable evidence that external memories, selfies included, can help individuals with memory impairments.

But while photos can in some instances help people to remember, the quality of the memories may be limited. We may remember what something looked like more clearly, but this could be at the expense of other types of information. One study showed that while photos could help people remember what they saw during some event, they reduced their memory of what was said.

She goes on to discuss the impact that viewing many photos from your past has on a malleable sense of self:

Research shows that we often create false memories about the past. We do this in order to maintain the identity that we want to have over time – and avoid conflicting narratives about who we are. So if you have always been rather soft and kind – but through some significant life experience decide you are tough – you may dig up memories of being aggressive in the past or even completely make them up.

I'm not so sure that it's a good thing to tell yourself the wrong story about who you are. For example, although I grew up in, and identified with, a macho ex-mining town environment, I've become happier by realising that my identity is separate from that.

I suppose it’s a bit different for me, as most of the photos I’m looking at are of me with my children and/or my wife. However, I still have to tell myself a story of who I am as a husband and a father, so in many ways it’s the same.

All in all, I love the fact that we can take photos anywhere and at any time. We may need to evolve social norms around the most appropriate ways of capturing images in crowded situations, but that’s separate to the very great benefit which I believe they bring us.

Source: The Conversation

Acoustic mirrors

On the beach at Druridge Bay in Northumberland, near where I live, there are large blocks in various intervals. These hulking pieces of concrete, now half-submerged, were deployed on seafronts up and down England to prevent the enemy successfully landing tanks during the Second World War.

I was fascinated to find out that these aren’t the only concrete blocks that protected Britain. BBC News reports that ‘acoustic mirrors’ were installed for a very specific purpose:

More than 100 years ago acoustic mirrors along the coast of England were built with the intention of using them to detect the sound of approaching German zeppelins.

The concave concrete structures were designed to pick up sound waves from enemy aircraft, making it possible to predict their flight trajectory, giving enough time for ground forces to be alerted to defend the towns and cities of Britain.

Some of these, which vary in size, still exist, and have been photographed by Joe Pettet-Smith.

The reason most of us haven’t heard of them is that the technology improved so quickly. Pettet-Smith comments:

The sound mirror experiment, this idea of having a chain of concrete structures facing the Channel using sound to detect the flight trajectory of enemy aircraft, was just that - an experiment. They tried many different sizes and designs before the project was scrapped when radar was introduced.

The science was solid, but aircraft kept getting faster and quieter, which made them obsolete.

Fascinating. The historian (and technologist) within me loves this.

Source: BBC News

Unpopular opinions on personal productivity

Before Christmas, I stumbled upon an interesting Twitter thread. It was started by Andrew Chen, General Partner at a16z, who asked:

What is your least popular but deeply held opinion on personal productivity?
He replied to his own tweet to get things started, commenting:

Being super organized is a bad thing. Means there's no room for serendipity, deep thought, can make you overly passive on other peoples' use of your time, as opposed to being focused on outbound. (Sorry to all my super Type A friends)

I'd definitely agree with that. Some of the others in the thread that I agree with are:
  • 9hour workdays are a byproduct of the industrial age. Personal productivity takes a deep fall after grinding on work for 5hours. Office hours kill personal time and productivity (@lpuchii)
  • Going on a run in the middle of the workday (@envarli)
  • Use pen and paper for scribbling notes (@uneeb123)
  • No one else has my job nor are they me, so I can’t simply follow the prescriptions of others. To be more productive, I need to look for new ideas and test. What works for someone else may be antithetical to my work. (@bguenther)
  • Great ideas rarely come from brainstorming sessions. It comes from pondering over a problem for a significant amount of time and coupling it with lots of experiments (@rajathkedi)
As ever, about half-way down the lengthy thread, it devolves into general productivity advice rather than 'unpopular opinions'. Still worth a browse!

Source: Andrew Chen (Twitter)

Confusing tech questions

Today is the first day of the Consumer Electronics Show, or CES, in Las Vegas. Each year, tech companies showcase their latest offerings and concepts. Nilay Patel, Editor-in-Chief for The Verge, comments that, increasingly, the tech industry is built on a number of assumptions about consumers and human behaviour:

[T]hink of the tech industry as being built on an ever-increasing number of assumptions: that you know what a computer is, that saying “enter your Wi-Fi password” means something to you, that you understand what an app is, that you have the desire to manage your Bluetooth device list, that you’ll figure out what USB-C dongles you need, and on and on.

Lately, the tech industry is starting to make these assumptions faster than anyone can be expected to keep up. And after waves of privacy-related scandals in tech, the misconceptions and confusion about how things works are both greater and more reasonable than ever.

I think this is spot-on. At Mozilla, and now at Moodle, I spend a good deal of my time among people who are more technically-minded than me. And, in turn, I’m more technically-minded than the general population. So what’s ‘obvious’ or ‘easy’ to developers feels like magic to the man or woman on the street.

Patel keeps track of the questions his friends and family ask him, and has listed them in the post. The number one thing he says that everyone is talking about is how people assume their phones are listening to them, and then serving up advertising based on that. They don’t get that Facebook (and other platforms) use multiple data points to make inferences.

I’ll not reproduce his list here, but here are three questions which I, too, get a lot from friends and family:

“How do I make sure deleting photos from my iPhone won’t delete them from my computer?”

“How do I keep track of what my kid is watching on YouTube?”

“Why do I need to make another username and password?”

As I was discussing with the MoodleNet team just yesterday, there’s a difference between treating users as ‘stupid’ (which they’re not) and ensuring that they don’t have to think too much when they’re using your product.

Source: The Verge (via Orbital Operations)

Feeling good (quote)

“You can’t get much done in life if you only work on the days when you feel good.”

(Jerry West)

Creativity as an ongoing experiment

It’s hard not to be inspired by the career of the Icelandic artist Björk. She really does seem to be single-minded and determined to express herself however she chooses.

This interview with her in The Creative Independent is from 2017 but was brought to my attention recently in their (excellent) newsletter. On being asked whether it’s OK to ever abandon a project, Björk replies:

If there isn’t the next step, and it doesn’t feel right, there will definitely be times where I don’t do it. But in my mind, I don’t look at it that way. It’s more like maybe it could happen in 10 years time. Maybe it could happen in 50 years time. That’s the next step. Or somebody else will take it, somebody else will look at it, and it will inspire them to write a poem. I look at it more like that, like it’s something that I don’t own.

[…]

The minute your expectations harden or crystallize, you jinx it. I’m not saying I can always do this, but if I can stay more in the moment and be grateful for every step of the way, then because I’m not expecting anything, nothing was ever abandoned.

Creativity isn’t something that can be forced, she says:

It’s like, the moments that I’ve gone to an island, and I’m supposed to write a whole album in a month, I could never, ever do that. I write one song a month, or two months, whatever happens… If there is a happy period or if there’s a sad period, or I have all the time in the world or no time in the world, it’s just something that’s kind of a bubbling underneath.
Perhaps my favourite part of the interview, however, is where Björk says that she likes leaving things open for growth and new possibilities:
I like things when they’re not completely finished. I like it when albums come out. Maybe it’s got something to do with being in bands. We spent too long… There were at least one or two albums we made all the songs too perfect, and then we overcooked it in the studio, and then we go and play them live and they’re kind of dead. I think there’s something in me, like an instinct, that doesn’t want the final, cooked version on the album. I want to leave ends open or other versions, which is probably why I end up still having people do remixes, and when I play them live, I feel different and the songs can grow.
Well worth reading in full, especially at this time of the year when everything seems full of new possibilities!

Source: The Creative Independent (via their newsletter)

Image by Maddie

Murmurations

Starlings where I live in Northumberland, England, also swarm like this, but not in such numbers.

I love the way that we give interesting names to groups of animals in English (e.g. a ‘murder’ of crows). There’s a whole list of them on Wikipedia.

Source: The Atlantic

Fanatics (quote)

“A fanatic is one who can’t change his mind and won’t change the subject.”

(Winston Churchill)

The problem with Business schools

This article is from April 2018, but was brought to my attention via Harold Jarche’s excellent end-of-year roundup.

Business schools have huge influence, yet they are also widely regarded to be intellectually fraudulent places, fostering a culture of short-termism and greed. (There is a whole genre of jokes about what MBA – Master of Business Administration – really stands for: “Mediocre But Arrogant”, “Management by Accident”, “More Bad Advice”, “Master Bullshit Artist” and so on.) Critics of business schools come in many shapes and sizes: employers complain that graduates lack practical skills, conservative voices scorn the arriviste MBA, Europeans moan about Americanisation, radicals wail about the concentration of power in the hands of the running dogs of capital. Since 2008, many commentators have also suggested that business schools were complicit in producing the crash.

When I finished my Ed.D. my Dad jokingly (but not-jokingly) said that I should next aim for an MBA. At the time, eight years ago, I didn't have the words to explain why I had no desire to do so. Now, however, understanding a little bit more about economics, and a lot more about co-operatives, I can see that the default operating system of organisations is fundamentally flawed.

If we educate our graduates in the inevitability of tooth-and-claw capitalism, it is hardly surprising that we end up with justifications for massive salary payments to people who take huge risks with other people’s money. If we teach that there is nothing else below the bottom line, then ideas about sustainability, diversity, responsibility and so on become mere decoration. The message that management research and teaching often provides is that capitalism is inevitable, and that the financial and legal techniques for running capitalism are a form of science. This combination of ideology and technocracy is what has made the business school into such an effective, and dangerous, institution.

I'm pretty sure that forming a co-op isn't on the curriculum of 99% of business schools. As Martin Parker, the author of this long article, points out after teaching in 'B-schools' for 20 years, ethical practices are covered almost reluctantly.

The problem is that business ethics and corporate social responsibility are subjects used as window dressing in the marketing of the business school, and as a fig leaf to cover the conscience of B-school deans – as if talking about ethics and responsibility were the same as doing something about it. They almost never systematically address the simple idea that since current social and economic relations produce the problems that ethics and corporate social responsibility courses treat as subjects to be studied, it is those social and economic relations that need to be changed.

So my advice to someone who's thinking of doing an MBA? Don't bother. You're not going to be learning things that make the world a better place. Save your money and do something more worthwhile. If you want to study something useful, try researching different ways of structuring organisations — perhaps starting by using this page as a portal to a Wikipedia rabbit hole?

Source: The Guardian (via Harold Jarche)

Working and leading remotely

As MoodleNet Lead, I’m part of a remote team. If you look at the org chart, I’m nominally the manager of the other three members of my team, but it doesn’t feel like that (at least to me). We’re all working on our areas of expertise and mine happens to be strategy, making sure the team’s OK, and interfacing with the rest of the organisation.

I’m always looking to get better at what I do, so a ‘crash course’ for managing remote teams by Andreas Klinger piqued my interest. There’s a lot of overlap with John O’Duinn’s book on distributed teams, especially in his emphasis of the difference between various types of remote working:

There is a bunch of different setups people call “remote teams”.
  • Satellite teams
    • 2 or more teams are in different offices.
  • Remote employees
    • most of the team is in an office, but a few single employees are remote
  • Fully distributed teams
    • everybody is remote
  • Remote first teams
    • which are “basically” fully distributed
    • but have a non-critical-mass office
    • they focus on remote-friendly communication

When i speak of remote teams, i mean fully distributed teams and, if done right, remote-first teams. I consider all the other one’s hybrid setups.

Using these terms, the Open Badges team at Mozilla was 'Remote first', and when I joined Moodle I was a 'Remote employee', and now the MoodleNet team is 'Fully distributed'.

Some things are easier when you work remotely, and some things are harder. One thing that’s definitely more difficult is running effective meetings:

Everybody loves meetings… right? But especially for remote teams, they are expensive, take effort and are – frankly – exhausting.

If you are 5 people, remote team:

  • You need to announce meetings upfront
  • You need to take notes b/c not everyone needs to join
  • Be on time
  • Have a meeting agenda
  • Make sure it’s not overtime
  • Communicate further related information in slack
  • etc
[...]

And this is not only about meetings. Meetings are just a straightforward example here. It’s true for any aspect of communication or teamwork. Remote teams need 5x the process.

I’m a big believer in working openly and documenting all the things. It saves hassle, it makes community contributions easier, and it builds trust. When everything’s out in the open, there’s nowhere to hide.

Working remotely is difficult because you have to be emotionally mature to do it effectively. You’re dealing with people who aren’t physically co-present, meaning you have to over-communicate intention, provide empathy at a distance, and not over-react by reading something into a communication that wasn’t intended. This takes time and practice.

Ideally, as remote team lead, you want what Laura Thomson at Mozilla calls Minimum Viable Bureaucracy, meaning that you don’t just get your ducks in a row, you have self-organising ducks. As Klinger points out:

In remote teams, you need to set up in a way people can be as autonomously as they need. Autonomously doesn’t mean “left alone” it means “be able to run alone” (when needed).

Think of people as “fast decision maker units” and team communication as “slow input/output”. Both are needed to function efficiently, but you want to avoid the slow part when it’s not essential.

At the basis of remote work is trust. There’s no way I can see what my colleagues are doing 99% of the time while they’re working on the same project as me. The same goes for me. Some people talk about having to ‘earn’ trust, but once you’ve taken someone through the hiring process, it’s better just to give them your trust until they act in a way which makes you question it.

Source: Klinger.io (via Dense Discovery)

Rules for Online Sanity

It’s funny: we tell kids not to be mean to one another, and then immediately jump on social media to call people out and divide ourselves into various camps.

This list by Sean Blanda has been shared in several places, and rightly so. I’ve highlighted what I consider to be the top three.

I’ve started thinking about what are the “new rules” for navigating the online world? If you could get everyone to agree (implicitly or explicitly) to a set of rules, what would they be? Below is an early attempt at an “Rules for Online Sanity” list. I’d love to hear what you think I missed.

  • Reward your “enemies” when they agree with you, exhibit good behavior, or come around on an issue. Otherwise they have no incentive to ever meet you halfway.
  • Accept it when people apologize. People should be allowed to work through ideas and opinions online. And that can result in some messy outcomes. Be forgiving.
  • Sometimes people have differing opinions because they considered something you didn’t.
  • Take a second.
  • There's always more to the story. You probably don't know the full context of whatever you're reading or watching.
  • If an online space makes more money the more time you spend on it, use sparingly.
  • Judge people on their actions, not their words. Don’t get outraged over what people said. Get outraged at what they actually do.
  • Try to give people the benefit of the doubt, be charitable in how you read people’s ideas.
  • Don’t treat one bad actor as representative of whatever group or demographic they belong to.
  • Create the kind of communities and ideas you want people to talk about.
  • Sometimes, there are bad actors that don’t play by the rules. They should be shunned, castigated, and banned.
  • You don’t always have the moral high ground. You are not always right.
  • Block and mute quickly. Worry about the bubbles that creates later.
  • There but for the grace of God go you.

Oh, and about "creating communities": why not support Thought Shrapnel via Patreon and comment on these posts along with people you already know you have something in common with?

Source: The Discourse (via Read Write Collect)

Baseline levels of conscientiousness

As I mentioned on New Year's Day, I’ve decided to trade some of my privacy for convenience, and am now using the Google Assistant on a regular basis. Unlike Randall Munroe, the author of xkcd, I have no compunction about outsourcing everything other than the Very Important Things That I’m Thinking About to other devices (and other people).

Source: xkcd

The endless Black Friday of the soul

This article by Ruth Whippman appears in the New York Times, so focuses on the US, but the main thrust is applicable on a global scale:

When we think “gig economy,” we tend to picture an Uber driver or a TaskRabbit tasker rather than a lawyer or a doctor, but in reality, this scrappy economic model — grubbing around for work, all big dreams and bad health insurance — will soon catch up with the bulk of America’s middle class.

Apparently, 94% of the jobs created in the last decade are freelance or contract positions. That's the trajectory we're on.

Almost everyone I know now has some kind of hustle, whether job, hobby, or side or vanity project. Share my blog post, buy my book, click on my link, follow me on Instagram, visit my Etsy shop, donate to my Kickstarter, crowdfund my heart surgery. It’s as though we are all working in Walmart on an endless Black Friday of the soul.

[...]

Kudos to whichever neoliberal masterminds came up with this system. They sell this infinitely seductive torture to us as “flexible working” or “being the C.E.O. of You!” and we jump at it, salivating, because on its best days, the freelance life really can be all of that.

I don't think this is a neoliberal conspiracy, it's just the logic of capitalism seeping into every area of society. As we all jockey for position in the new-ish landscape of social media, everything becomes mediated by the market.

What I think’s missing from this piece, though, is a longer-term trend towards working less. We seem to be endlessly concerned about how the nature of work is changing rather than the huge opportunities for us to do more than waste away in bullshit jobs.

I’ve been advising anyone who’ll listen over the last few years that reducing the number of days you work has a greater impact on your happiness than earning more money. Once you reach a reasonable salary, there are diminishing returns in any case.

Source: The New York Times (via Dense Discovery)

Blockchain bullshit

I’m sure blockchain technologies are going to revolutionise some sectors. But it’s not a consumer-facing solution; its applications are mainly back-office.

Of course, a lot of the hype around blockchain came through the link between it and cryptocurrencies like Bitcoin.

There’s a very real problem here, though. People with decision-making power read predictions by consultants and marketers. Then, without understanding what the tech really is or does, they ensure it’s a requirement in tendering processes. This means that vendors either have to start offering that tech, or lie about the fact that they are able to do so.

We documented 43 blockchain use-cases through internet searches, most of which were described with glowing claims like “operational costs… reduced up to 90%,” or with the assurance of “accurate and secure data capture and storage.” We found a proliferation of press releases, white papers, and persuasively written articles. However, we found no documentation or evidence of the results blockchain was purported to have achieved in these claims. We also did not find lessons learned or practical insights, as are available for other technologies in development.

We fared no better when we reached out directly to several blockchain firms, via email, phone, and in person. Not one was willing to share data on program results, MERL processes, or adaptive management for potential scale-up. Despite all the hype about how blockchain will bring unheralded transparency to processes and operations in low-trust environments, the industry is itself opaque. From this, we determined the lack of evidence supporting value claims of blockchain in the international development space is a critical gap for potential adopters.

There’s a simple lesson here: if you don’t understand something, don’t say it’s going to change the world.

Source: MERL Tech (via The Register)

Social mobility

This diagram by Jessica Hagy is a fantastic visual reminder to stay curious:

Source: Indexed

Looking back and forward in tech

Looking back at 2018, Amber Thomas commented that, for her, a few technologies became normalised over the course of the year:

  1. Phone payments
  2. Voice-controlled assistants
  3. Drones
  4. Facial recognition
  5. Fingerprints
Apart from drones, I've spent the last few years actively avoiding the above. In fact, I spent most of 2018 thinking about decentralised technology, privacy, and radical politics.

However, December is always an important month for me. I come off social media, stop blogging, and turn another year older just before Christmas. It’s a good time to reflect and think about what’s gone before, and what comes next.

Sometimes, it’s possible to identify a particular stimulus to a change in thinking. For me, it was while I was watching Have I Got News For You and the panellists were shown a photo of a fashion designer who put a shoe in front of their face to avoid being recognisable. Paul Merton asked, “doesn’t he have a passport?”

Obvious, of course, but I’d recently been travelling and using the biometric features of my passport. I’ve also relented this year and use the fingerprint scanner to unlock my phone. I realised that the genie isn’t going back in the bottle here, and that everyone else was using my data — biometric or otherwise — so I might as well benefit, too.

Long story short, I’ve bought a Google Pixelbook and Lenovo Smart Display over the Christmas period which I’ll be using in 2019 to make my life easier. I’m absolutely trading privacy for convenience, but it’s been a somewhat frustrating couple of years trying to use nothing but Open Source tools.

I’ll have more to say about all of this in due course, but it’s worth saying that I’m still committed to living and working openly. And, of course, I’m looking forward to continuing to work on MoodleNet.

Source: Fragments of Amber

See you in 2019!

Thought Shrapnel will be back next year. Until then, unless you’re a supporter, that’s it for 2018.

Thanks for reading, and have a good break.

Routine and ambition (quote)

“Routine, in an intelligent man, is a sign of ambition.”

(W.H. Auden)

Is the unbundling and rebundling of Higher Education actually a bad thing?

Until I received my doctorate and joined the Mozilla Foundation in 2012, I’d spent fully 27 years in formal education. Either as a student, a teacher, or a researcher, I was invested in the Way Things Currently Are®.

Over the past six years, I’ve come to realise that a lot of the scaremongering about education is exactly that — fears about what might happen, based on not a lot of evidence. Look around; there are a lot of doom-mongers about.

It was surprising, therefore, to read a remarkably balanced article in EDUCAUSE Review. Laura Czerniewicz, Director of the Centre for Innovation in Learning and Teaching (CILT), at the University of Cape Town, looks at the current state of play around the ‘unbundling’ and ‘rebundling’ of Higher Education.

Very simply, I'm using the term unbundling to mean the process of disaggregating educational provision into its component parts, very often with external actors. And I'm using the term rebundling to mean the reaggregation of those parts into new components and models. Both are happening in different parts of college and university education, and in different parts of the degree path, in every dimension and aspect—creating an extraordinarily complicated environment in an educational sector that is already in a state of disequilibrium.

Unbundling doesn’t simply happen. Aspects of the higher education experience disaggregate and fragment, and then they get re-created—rebundled—in different forms. And it’s the re-creating that is especially of interest.

Although it’s largely true that the increasing marketisation is a stimulus for the unbundling of Higher Education, I’m of the opinion that what we’re seeing has been accelerated primarily because of the internet. The end of capitalism wouldn’t necessarily remove the drive towards this unbundling and rebundling. In fact, I wonder what it would look like if it were solely non-profits, charities, and co-operatives doing this?

Czerniewicz identifies seven main aspects of Higher Education that are being unbundled:

  1. Curriculum
  2. Resources
  3. Flexible pathways
  4. Academic expertise
  5. Opportunities
    • Support
    • Credentials
    • Networks
  6. Graduateness (i.e. 'the status of being a graduate')
  7. Experience
    • Mode (e.g. online, blended)
    • Place
As a white male with a terminal degree sitting outside academia, I guess I have a great deal of privilege to check. That being said, I do (as ever) have some opinions about all of this.

As Czerniewicz points out, there isn’t anything inherently wrong with unbundling and rebundling. It’s potentially a form of creative destruction, followed by some Hegelian synthesis.

But I'd like to conclude on a hopeful note. Unbundling and rebundling can be part of the solution and can offer opportunities for reasonable and affordable access and education for all. Unbundling and rebundling are opening spaces, relationships, and opportunities that did not exist even five years ago. These processes can be harnessed and utilized for the good. We need to critically engage with these issues to ensure that the new possibilities of provision for teaching and learning can be fully exploited for democratic ends for all.
Goodness knows that, as a sector, Higher Education can do a much better job of the three main things I'd say we'd want of universities in 2018:
  • Developing well-rounded citizens ready to participate fully in democratic society.
  • Sending granular signals to the job market about the talents and competencies of individuals.
  • Enabling extremely flexible provision for those in work, or who want to take different learning pathways.
That's not even to mention universities as places of academic freedom and resistance to forms of oppression (including the State).

I think the main reason I’m interested in all of this is mainly through the lens of new forms of credentialing. Czerniewicz writes:

Certification is an equity issue. For most people, getting verifiable accreditation and certification right is at the heart of why they are invested in higher education. Credentials may prove to be the real equalizers in the world of work, but they do raise critical questions about the function and the reputation of the higher education institution. They also raise questions about value, stigma, and legitimacy. A key question is, how can new forms of credentials increase access both to formal education and to working opportunities?
I agree. So the main reason I got involved in Open Badges was that I saw the inequity as a teacher. I want, by the time our eldest child reaches the age where he's got the choice to go to university (2025), to be able to make an informed choice not to go — and still be OK. Credentialing is an arms race that I've done alright at, but which I don't really want him to be involved in escalating.

So, to conclude, I’m actually all for the unbundling and rebundling of education. As Audrey Watters has commented many times before, it all depends who is doing the rebundling. Is it solely for a profit motive? Is it improving things for the individual? For society? Who gains? Who loses?

Ultimately, this isn’t something that can be particularly ‘controlled’, only observed and critiqued. No-one is secretly controlling how this is playing out worldwide. That’s not to say, though, that we shouldn’t call out and resist the worst excesses (I’m looking at you, Facebook). There’s plenty of pedagogical progress we can make as this all unfolds.

Source: Educause

Credentials and standardisation

Someone pinch me, because I must be dreaming. It’s 2018, right? So why are we still seeing this kind of article about Open Badges and digital credentials?

“We do have a little bit of a Wild West situation right now with alternative credentials,” said Alana Dunagan, a senior research fellow at the nonprofit Clayton Christensen Institute, which researches education innovation. The U.S. higher education system “doesn’t do a good job of separating the wheat from the chaff.”
You'd think by now we'd realise that we have a huge opportunity to do something different here and not just replicate the existing system. Let's credential stuff that matters rather than some ridiculous notion of 'employability skills'. Open Badges and digital credentials shouldn't be just another stick to beat educational institutions.

Nor do they need to be ‘standardised’. One person’s ‘wild west’ is another person’s landscape of huge opportunity. We’re not living in a world of 1950s career pathways.

“Everybody is scrambling to create microcredentials or badges,” Cheney said. “This has never been a precise marketplace, and we’re just speeding up that imprecision.”

Arizona State University, for example, is rapidly increasing the number of online courses in its continuing and professional education division, which confers both badges and certificates. According to staff, the division offers 200 courses and programs in a slew of categories, including art, history, education, health and law, and plans to provide more than 500 by next year.

My eyes are rolling out of my head at this point. Thankfully, I’ve already written about misguided notions around ‘quality’ and ‘rigour’, as well as thinking through in a bit more detail what earning a ‘credential’ actually means.

Source: The Hechinger Report

Are we nearing the end of the Facebook era?

Betteridge’s law of headlines states that “any headline that ends in a question mark can be answered by the word no.” So perhaps I should have rephrased the title of this post.

However, I did find this post by Gina Bianchini interesting about what people are using instead of Facebook:

The three most obvious alternatives people are turning to are:
  1. Private Messaging Platforms. We’re already seeing people move conversations with their family and close friends to iMessage, Houseparty, Marco Polo, Telegram, Discord, and Signal for their most important relationships or interests.
  2. Vertical Social Networks and Subscription Content. Watch as time spent on The Athletic, NextDoor, Houzz, and other verticals goes up in the next year. People want to connect to content that matters to them, and the services that focus on a specific subject area will win their domain.
  3. Highly Curated, Professional-Led Podcasts, Email Newsletters, Events, and Membership Communities. The professionalization of creators and influencers will continue unabated. Emboldened by the fact that their followers are now willing to follow them to new places (and increasingly even pay for access), these emerging brands will look to own their engagement and relationships, not rent them from Facebook.
As ever, people will say that Facebook will never go away because the majority of people use it. But, as Bianchini points out, innovation happens at the edges, among the early adopters. Many of those have already moved on:
Growth halts on the edges, not the core. Facebook’s prominence is eroding as the sources of creativity and goodwill that gave it magic, substance, and cultural relevance are quietly moving on. The reality is that Facebook stopped giving creators a return on their time a long time ago.

[…]

Big brands will be the last to leave. Unlike creators and Group admins, big brands will stick with Facebook for as long as possible. Despite CPMs jumping 171% in one year, big brands have institutionalized Facebook ad buying and posting not only with budgets but with dedicated teams. They’re too invested to acknowledge the writing on the wall, despite objectively diminishing returns.

This all comes from a renewed interest in ‘quality’ time. I was particularly interested in the way Seth Godin recently talked about how the digital divide is being flipped. Bianchini concludes:

As more people become conscious of how we spend our time online, we will choose differently. We will seek to feel good about what we’re contributing and what we’re getting out of our time invested. There will emerge new safe, positive places governed not by algorithms and monolithic companies, but curated by real people who have a passion for inspiring and uplifting other human beings.
It's really interesting to see this change happening. As she says, it's not 'inevitable', but cultural differences and personal values are as important in the digital world as in the physical.

Also, as we should always remember, Facebook the company owns WhatsApp and Instagram, so they’ll be fine whatever happens. They’ve hedged their bets as any monopoly player would do.

Source: LinkedIn

Asking Google philosophical questions

Writing in The Guardian, philosopher Julian Baggini reflects on a recent survey which asked people what they wish Google was able to answer:

The top 25 questions mostly fall into four categories: conspiracies (Who shot JFK? Did Donald Trump rig the election?); desires for worldly success (Will I ever be rich? What will tomorrow’s winning lottery numbers be?); anxieties (Do people like me? Am I good in bed?); and curiosity about the ultimate questions (What is the meaning of life? Is there a God?).
This is all hypothetical, of course, but I'm always amazed by what people type into search engines. It's as if there's some 'truth' in there, rather than just databases and algorithms. I suppose I can understand children asking voice assistants such as Alexa and Siri questions about the world, because they can't really know how the internet works.

What Baggini points out, though, is that what we type into search engines can reflect our deepest desires. That’s why they trawl the search history of suspected murderers, and why the Twitter account Theresa May Googling is so funny.

A Google search, however, cannot give us the two things we most need: time and other people. For our day-to-day problems, a sympathetic ear remains the most powerful device for providing relief, if not a cure. For the bigger puzzles of existence, there is no substitute for long reflection, with help from the great thinkers of history. Google can lead us directly to them, but only we can spend time in their company. Search results can help us only if they are the start, not the end, of our intellectual quest.
Sadly, in the face of, let's face it, pretty amazing technological innovation over the last 25 years, we've forgotten what it is that makes us human: connections. Thankfully, some more progressive tech companies are beginning to realise the importance of the Humanities — including Philosophy.

Source: The Guardian

Gamifying Wikipedia for new editors

Hands up who uses Wikipedia? OK, keep your hands up if you edit it too? Ah.

Not only does Wikipedia need our financial donations to keep running, it also needs our time. To encourage people to edit it, the Wikimedia Foundation have created an ‘adventure’ by way of orientation.

It’s split into seven stages:

  1. Say Hello to the World
  2. An Invitation to Earth
  3. Small Changes, Big Impact
  4. The Neutral Point of View
  5. The Veil of Verifiability
  6. The Civility Code
  7. Looking Good Together
It's always good to be a little playful, especially when welcoming people into a project or community. There's also an 'Interstellar Lounge' where you can chill out, listen to openly-licensed music, and get help!

Source: Wikipedia (via Scott Leslie)

Daily routine (quote)

“The secret to your success is found in your daily routine.”

(John C. Maxwell)

The many uses of autonomous vehicles

While I’m not a futurist, I am interested in predictions about the future that I didn’t expect… but, on reflection, are entirely obvious. I’m quite looking forward to (well-regulated, co-operatively owned) autonomous vehicles. I think they’ll revolutionise life for the very young and the very old in particular.

What I hadn’t thought about, but which a new report certainly has considered, is all of the other uses for self-driving cars:

“One of the starting points was that AVs will provide new forms of competition for hotels and restaurants. People will be sleeping in their vehicles, which has implications for roadside hotels. And people may be eating in vehicles that function as restaurant pods,” says Scott Cohen, deputy director of research of the School of Hospitality and Tourism Management at the University of Surrey in the U.K., who led the study. “That led us to think, besides sleeping, what other things will people do in cars when free from the task of driving? And you can see that in the long association of automobiles and sex that’s represented in just about every coming-of-age movie. It’s not a big leap.”
I remember talking to one taxi driver who said that he drove former footballer Alan Shearer back home to the North East from the Match of the Day studio in London. Shearer would travel overnight and sleep in the cab so that he was home for Sunday breakfast with his family. Of course, with autonomous vehicles designed for that kind of thing (and, erm, others) that would be much more comfortable.

Source: Fast Company


Image CC BY-SA Florian K

Open source is as much about culture as it is about code

The talented Abby Cabunoc Mayes, who I worked with when I was at the Mozilla Foundation (and who I caught up with briefly at MozFest), was interviewed recently by TechRepublic. I like the way she frames the Open Source movement:

I like to think the movement really came together with The Cathedral and the Bazaar, an essay by Eric Raymond. And he compared the two ideas. There's the cathedral, or free software, where a small group of people are putting together a big cathedral that anyone can come to, and attend a service or whatever. He compared that to a bazaar, where everyone is co-creating. There's no real structure, you can set up a table wherever you want. You can haggle with other people. So open source, he really compared that to the Linux foundation at the time, where he was seeing so much delegation, so many people taking on tasks that would have been closed, in the cathedral model. So that idea that anyone can get involved, and anyone can participate, is really that key. Rather than just giving away something for free.
If you do an image search for Eric Raymond, you'll find some photos of him holding guns, as he's an enthusiast. I don't like guns, nor do many people, but I'd like to think we can separate someone's ideas about organising from their thoughts in a different area. I know some would beg to differ.

The interviewer goes on to ask Abby what the advantages of working openly are:

There's a lot more buy-in from people. And having this distributed model, where anyone can take a part of this, and anyone can be involved in running the project, really helps keep the power not centralized, but really distributed. And so, you can see what's happening to your data. So there's a lot of advantages that way, and a lot more trust with the population. And I think this is where innovation happens. When everyone can be a part of something, and where everyone can submit the best ideas. And I think we saw that in the scientific revolution, when the academic journals started. And people were publishing their research, and then letting other people use that and build upon that and discover more things. We saw the same thing happen with open source. Where you can really take this and use and do whatever you want with it.
I think it's important to keep linking and talking about this kind of stuff. Unfortunately, I feel like our cultural default is to try and take all the credit and work in silos.

Source: TechRepublic

What are 'internet-era ways of working'?

Tom Loosemore, formerly of the UK Government Digital Service (GDS) and Co-op Digital, has founded a new organisation that advises governments and large public organisations.

That organisation, Public.digital, has defined ‘internet era ways of working’ which, as you’d expect, are fascinating:

  1. Design for user needs, not organisational convenience
  2. Test your riskiest assumptions with actual users
  3. The unit of delivery is the empowered, multidisciplinary team
  4. Do the hard work to make things simple
  5. Staying secure means building for resilience
  6. Recognise the duty of care you have to users, and to the data you hold about them
  7. Start small and optimise for iteration. Iterate, increment and repeat
  8. Make things open; it makes things better
  9. Fund product teams, not projects
  10. Display a bias towards small pieces of technology, loosely joined
  11. Treat data as infrastructure
  12. Digital is not just the online channel
There's a wealth of information underneath each of these, but I feel like just these top-level points should be put on a good-looking poster in (home) offices everywhere!

The only things I’d add, from smaller but similar work I’ve done around this, are:

  • Make your teams and organisation as diverse as possible
  • Ensure that your data is legible by both humans and machines
But I'm nitpicking. This is great stuff.

Source: Public.digital

Is UBI 'hush money'?

Over the last few years, I’ve been quietly optimistic about Universal Basic Income, or ‘UBI’. It’s an approach that seems to have broad support across the political spectrum, although obviously for different reasons.

A basic income, also called basic income guarantee, universal basic income (UBI), basic living stipend (BLS), or universal demogrant, is a type of program in which citizens (or permanent residents) of a country may receive a regular sum of money from a source such as the government. A pure or unconditional basic income has no means test, but unlike Social Security in the United States it is distributed automatically to all citizens without a requirement to notify changes in the citizen's financial status. Basic income can be implemented nationally, regionally or locally. (Wikipedia)
Someone whose thinking I hugely respect, Douglas Rushkoff, thinks that UBI is a 'scam':
The policy was once thought of as a way of taking extreme poverty off the table. In this new incarnation, however, it merely serves as a way to keep the wealthiest people (and their loyal vassals, the software developers) entrenched at the very top of the economic operating system. Because of course, the cash doled out to citizens by the government will inevitably flow to them.

Think of it: The government prints more money or perhaps — god forbid — it taxes some corporate profits, then it showers the cash down on the people so they can continue to spend. As a result, more and more capital accumulates at the top. And with that capital comes more power to dictate the terms governing human existence.

I have to agree with Rushkoff when he talks about UBI leading to more passivity and consumption rather than action and ownership:

Meanwhile, UBI also obviates the need for people to consider true alternatives to living lives as passive consumers. Solutions like platform cooperatives, alternative currencies, favor banks, or employee-owned businesses, which actually threaten the status quo under which extractive monopolies have thrived, will seem unnecessary. Why bother signing up for the revolution if our bellies are full? Or just full enough?

Under the guise of compassion, UBI really just turns us from stakeholders or even citizens to mere consumers. Once the ability to create or exchange value is stripped from us, all we can do with every consumptive act is deliver more power to people who can finally, without any exaggeration, be called our corporate overlords.

Rushkoff calls UBI 'hush money', a method for keeping the masses quiet while those at the top become ever more wealthy. Unfortunately, we live in the world of the purist, where no action is good enough or pure enough in its intent. I agree with Rushkoff that we need more worker ownership of organisations, but I appreciate Noam Chomsky's view of change: you don't ignore an incremental improvement in people's lives, just because you're hoping for a much bigger one round the corner.

Source: Douglas Rushkoff

Issue [#323]: 46 hours in transit

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Nature of things (quote)

“One cannot in the nature of things expect a little tree that has been turned into a club to put forth leaves.”

(Martin Buber)

Identity is a pattern in time

When I was an undergraduate at Sheffield University, one of my Philosophy modules (quite appropriately) blew my mind. Entitled Mind, Brain and Personal Identity, it’s still being taught there, almost 20 years later.

One of the reasons for studying Philosophy is that it challenges your assumptions about the world as well as the ‘cultural programming’ of how you happened to be brought up. This particular module challenged my beliefs around a person being a single, contiguous being from birth to death.

That’s why I found this article by Esko Kilpi about workplace culture and identity particularly interesting:

There are two distinctly different approaches to understanding the individual and the social. Mainstream thinking sees the social as a community, on a different level from the individuals who form it. The social is separate from the individuals. “I” and “we” are separate things and can be understood separately.

Although he doesn’t mention it, Kilpi is actually invoking the African philosophy of Ubuntu here.

Ubuntu (Zulu pronunciation: [ùɓúntʼù]) is a Nguni Bantu term meaning "humanity". It is often translated as "I am because we are," and also "humanity towards others", but is often used in a more philosophical sense to mean "the belief in a universal bond of sharing that connects all humanity".

Instead of seeing the individual as “silent and private” and social interaction as “vocal and more public”, individuals are “thoroughly social”:

In this way of thinking, we leave behind the western notion of the self-governing, independent individual for a different notion, of interdependent people whose identities are established in interaction with each other. From this perspective, individual change cannot be separated from changes in the groups to which an individual belongs. And changes in the groups don’t take place without the individuals changing. We form our groups and our followerships and they form us at the same time, all the time.

This is why I believe in open licensing, open source, and working as openly as possible. It maximises social relationships, and helps foster individual development within those groups.

Source: Esko Kilpi

An app to close down your workday effectively

In Cal Newport’s book Deep Work, he talks about the importance of closing down your working day properly, so you can enjoy leisure time. Ovidiu Cherecheș, a developer, has built a web application called Jobs Done! to help with that:

This app is built on Cal Newport's shutdown ritual concept from his book Deep Work.

The need for a shutdown ritual comes from the following (oversimplified) reasoning:

  1. Deep focus is invaluable for producing great work
  2. We can only sustain deep focus for a limited amount of hours per day
  3. To be able to focus deeply consistently, our mind requires rest (i.e. complete disconnection from work) between working sessions

It makes sense to me. So here's how this app works:

You decide it's time to call it a day.

You are guided through a set of (customizable) steps meant to relieve your mind from work-related thoughts. This often involves formalizing thoughts into tasks and creating a plan for tomorrow. Each step can have one or more external links attached.

Then you say a “set phrase” out loud. This step is personal so choose a set phrase you resonate with. Verbalizing your set phrase “provides a simple cue to your mind that it’s safe to release work-related thoughts for the rest of the day.”

Finally, you’re presented an array of (customizable) pastime activities you could do to disconnect.
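The flow described above is simple enough to sketch as a tiny script. This is just an illustration of the ritual's shape; the steps, set phrase, and pastimes below are hypothetical placeholders, not the app's actual defaults:

```python
# A minimal sketch of a Deep Work-style shutdown ritual, inspired by the
# Jobs Done! flow described above. The steps, set phrase, and pastimes
# are hypothetical placeholders, not the app's actual defaults.

STEPS = [
    "Capture any loose thoughts as tasks",
    "Review tomorrow's calendar",
    "Write a rough plan for tomorrow",
]
SET_PHRASE = "Shutdown complete."
PASTIMES = ["Go for a walk", "Read fiction", "Cook dinner"]


def shutdown_ritual(steps, set_phrase, pastimes):
    """Walk through each step, then close with the set phrase and pastimes."""
    lines = [f"Step {i}: {step}" for i, step in enumerate(steps, start=1)]
    lines.append(f'Say out loud: "{set_phrase}"')
    lines.append("Now disconnect. You could: " + ", ".join(pastimes))
    return "\n".join(lines)


if __name__ == "__main__":
    print(shutdown_ritual(STEPS, SET_PHRASE, PASTIMES))
```

The real app is, of course, a friendlier guided web interface; the point is only that the ritual is a fixed, customisable checklist ending in a verbal cue.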

I think this is one of those things you use to get into the habit, and then you probably don’t need after that. Worth trying!

Source: Web app / Code

Immortality and Sunday afternoons (quote)

“Millions long for immortality who don’t know what to do with themselves on a rainy Sunday afternoon.”

(Susan Ertz)

CUNY Commons in a Box OpenLab

Earlier this year, at the Open Education Global conference in Delft, I went to a session where members of staff from CUNY talked about ‘Commons in a Box’. The latest version, now referred to as ‘CBOX OpenLab’ has just been released:

CBOX OpenLab provides a powerful and flexible open alternative to costly proprietary educational platforms, allowing individual faculty members, departments, and entire institutions to easily set up an online community space designed for open learning.

Its name brings together two important ideas: openness and collaboration. Unlike closed online teaching systems, CBOX OpenLab allows members to share their work openly with one another and the world. Like a lab, it provides a space where students, faculty, and staff can work together, experiment, and innovate.

It’s effectively a WordPress plugin which transforms a vanilla install of the content management system into something that allows for collaboration in an academic context. I’m looking forward to having a play!

I had to click through several link-strewn pages to get to the meat of it, so let me just share that here, for the sake of clarity.

Sources: Announcement / Showcase / WordPress plugin

Time's brevity (quote)

“Those who make the worst use of their time are the first to complain of its brevity.”

(Jean de La Bruyère)

Issue [#322]: Back-to-back

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Openness, sharing, and choosing a CC license

The prolific Alan Levine wrote recently about licenses, and how really they’re not the be-all and end-all of sharing openly:

If we just focus on licenses and picking through the morsels of what it does and does not do, IMHO we lose sight of the bigger things about sharing our work and acknowledging the work of others as a form of gratitude, not compliance with rules.

[…]

Share for gratitude, not for rules and license terms.

I absolutely agree. The problem is, though, that people don’t know the basics. For example, sometimes I choose to credit those who share images under a CC0 license, sometimes not. Either way, I don’t have to, and not everyone is aware of that.

Which is why I found this infographic (itself CC BY SA 3.0) on Creative Commons licenses particularly useful:

CC license ‘choo choo train’ infographic

Sources: CogDogBlog / Jöran Muuß-Merholz

Tennessee Williams on the problems that come with success

I can’t remember now where I came across this link to a 1947 essay entitled ‘The Catastrophe of Success’ written by Tennessee Williams for The New York Times. It’s excellent, and I’m not sure how to keep this down to my customary maximum limit of three quotations.

Williams talks about being suddenly thrust into the limelight and a life of luxury after, well, the opposite:

The sort of life that I had had previous to this popular success was one that required endurance, a life of clawing and scratching along a sheer surface and holding on tight with raw fingers to every inch of rock higher than the one caught hold of before, but it was a good life because it was the sort of life for which the human organism is created.

Staying in a 'first-class hotel suite' didn't bring him pleasure, but instead left him rather depressed. He didn't feel inspired or ready to create a follow-up to his breakout play The Glass Menagerie, and was embarrassed not only by the attention, but because he no longer had to perform any menial tasks:

I have been corrupted as much as anyone else by the vast number of menial services which our society has grown to expect and depend on. We should do for ourselves or let the machines do for us, the glorious technology that is supposed to be the new light of the world. We are like a man who has bought up a great amount of equipment for a camping trip, who has the canoe and the tent and the fishing lines and the axe and the guns, the mackinaw and the blankets, but who now, when all the preparations and the provisions are piled expertly together, is suddenly too timid to set out on the journey but remains where he was yesterday and the day before and the day before that, looking suspiciously through white lace curtains at the clear sky he distrusts. Our great technology is a God-given chance for adventure and for progress which we are afraid to attempt.

The biggest takeaway for me is the line I've highlighted below. We're meant to struggle in life. That doesn't mean a life of poverty or hardship, but it is important to struggle towards something, particularly in creative endeavours:

One does not escape that easily from the seduction of an effete way of life. You cannot arbitrarily say to yourself, I will not continue my life as it was before this thing, Success, happened to me. But once you fully apprehend the vacuity of a life without struggle you are equipped with the basic means of salvation. Once you know this is true, that the heart of man, his body and his brain, are forged in a white-hot furnace for the purpose of conflict (the struggle of creation) and that with the conflict removed, the man is a sword cutting daisies, that not privation but luxury is the wolf at the door and that the fangs of this wolf are all the little vanities and conceits and laxities that Success is heir to—why, then with this knowledge you are at least in a position of knowing where danger lies.
So, yes, the 'catastrophe' of success.

Source: Genius.com

What would you do if you were the richest man in the world? Now you can find out!

This is simultaneously amusing and horrifying:

A simple text-based adventure exploring the age-old question: What would you do if you had more money than any single human being should ever have?

It's a text-based adventure game that gives you options as the richest man on earth, while educating you on how that money was amassed, and the scale of what would be possible with that kind of wealth.

Source: You Are Jeff Bezos

Configuring your iPhone for productivity (and privacy, security?)

At an estimated read time of 70 minutes, though, this article is the longest I’ve seen on Medium! It includes a bunch of advice from ‘Coach Tony’, the CEO of Coach.me, about how he uses his iPhone, and perhaps how you should too:

The iPhone could be an incredible tool, but most people use their phone as a life-shortening distraction device.

However, if you take the time to follow the steps in this article you will be more productive, more focused, and — I’m not joking at all — live longer.

Practically every iPhone setup decision has tradeoffs. I will give you optimal defaults and then trust you to make an adult decision about whether that default is right for you.

As an aside, I appreciate the way he sets up different ways to read the post, from skimming the headlines through to reading the whole thing in-depth.

However, the problem is that for a post that the author describes as a ‘very very complete’ guide to configuring your iPhone to ‘work for you, not against you’, it doesn’t go into enough depth about privacy and security for my liking. I’m kind of tired of people thinking that using a password manager and increasing your lockscreen password length is enough.

For example, Coach Tony talks about basically going all-in on Google Cloud. When people point out the privacy concerns of doing this, he uses the tinfoil hat defence in response:

Moving to the Google cloud does trade privacy for productivity. Google will use your data to advertise to you. However, this is a productivity article. If you wish it were a privacy article, then use Protonmail. Last, it’s not consistent that I have you turn off Apple’s ad tracking while then making yourself fully available to Google’s ad tracking. This is a tradeoff. You can turn off Apple’s tracking with zero downside, so do it. With Google, I think it’s worthwhile to use their services and then fight ads in other places. The Reader feature in Safari basically hides most Google ads that you’d see on your phone. On your computer, try an ad blocker.

It's all very well saying that it's a productivity article rather than a privacy article. But it's 2018, you need to do both. Don't recommend things to people that give them gains in one area but cause them new problems in others.

That being said, I appreciate Coach Tony’s focus on what I would call ‘notification literacy’. Perhaps read his article, ignore the bits where he suggests compromising your privacy, and follow his advice on configuring your device for a calmer existence.

Source: Better Humans

Time flies (quote)

The bad news is time flies. The good news is you’re the pilot.

(Michael Altshuler)

Designing calm products

As I mentioned on last week’s TIDE podcast, recorded live in the Lake District, this article from Amber Case about designing calm products is really useful:

Making a good product is an important responsibility, especially if the product is close enough to someone that it can be the difference between life and death. Even though the end result might be calm, designing a calm, human-centered product requires some anxiety and perfectionism from everyone on the team, not just the designer.

She's designed a Calm Design quiz, which gives a score card for your product. As the quiz is applicable to every kind of product, not just apps, it has questions that you can skip over if they're not relevant — e.g. whether the product has physical buttons with a blue screen.

It’s a clever way to package up design principles, I think. For example, without reading her book, and over and above regular accessibility guidelines, I learned that the following might be good for MoodleNet:

  • Stable interfaces
  • Grouping frequently used icons
  • Allowing users to prominently display favourite commands
  • Turning Notifications off by default (except the most important ones)
  • Plain-language privacy policy
  • Allow export of user data at any time
  • Include different notification types based on importance
  • Maintain some functionality even without an internet connection

It's a great approach, and it would be very interesting to score some of my favourite (and least favourite) products. For example, as I said to Dai during the podcast when we discussed this, my Volvo V60's driver display would score pretty highly.

Source: Amber Case

Wishing and planning (quote)

“It takes as much energy to wish as it does to plan.”

(Eleanor Roosevelt)

Is planning just guessing?

Eylan Ezekiel pointed to this post on the Signal v. Noise blog recently on our Slack channel. The CEO of Basecamp, Jason Fried, points out that most business ‘planning’ is simply guesswork:

So next time you’re working on a business plan, call it a business guess. And that financial plan? It’s a financial guess. Strategic planning? Call it what it really is: a strategic guess. 5 year plan? You mean 5 year guess.

There’s nothing wrong with guessing, dreaming, or predicting, but it’s not planning. Planning’s too definite a term for most things. We often use planning when we really mean guessing. And what we call it has a lot to do with how we think about it, do about it, and devote to it. I think companies often over think, over do, and over devote to planning.

I can't believe that people still even attempt five-year plans. It didn't work for Stalin; it won't work for you!

The reason I’m particularly receptive to this at the moment is that I need to be thinking what happens after we launch the first version of MoodleNet. I could make confident assertions, but actually I don’t know. It depends on the feedback we get from users!

I’m always a little suspicious of people who come across like they’ve got it all figured out. Life is messy. This post respects that.

Source: Signal v. Noise

Issue [#321]: Small talk and tiny conferences

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Absorb what is useful (quote)

“Absorb what is useful. Discard what is not. Add what is uniquely your own.”

(Bruce Lee)

Decentralisation and networked agency

I came to know of Ton Zylstra through some work I did with Jeroen de Boer and the Bibliotheekservice Fryslân team in the Netherlands last year. While I haven’t met Zylstra in person, I’m a fan of his ideas.

In a recent post he talks about the problems of generic online social networks:

Discourse disintegrates I think specifically when there’s no meaningful social context in which it takes place, nor social connections between speakers in that discourse. The effect not just stems from that you can’t/don’t really know who you’re conversing with, but I think more importantly from anyone on a general platform being able to bring themselves into the conversation, worse even force themselves into the conversation. Which is why you never should wade into newspaper comments, even though we all read them at times because watching discourse crumbling from the sidelines has a certain addictive quality. That this can happen is because participants themselves don’t control the setting of any conversation they are part of, and none of those conversations are limited to a specific (social) context.

Although he goes on to talk about federation, it's his analysis of the current problem that I'm particularly interested in here. He mentions in passing some work that he's done on 'networked agency', a term that could be particularly useful. It's akin to Nassim Nicholas Taleb's notion of 'skin in the game'.

Zylstra writes:

Unlike in your living room, over drinks in a pub, or at a party with friends of friends of friends. There you know someone. Or if you don’t, you know them in that setting, you know their behaviour at that event thus far. All have skin in the game as well misbehaviour has immediate social consequences. Social connectedness is a necessary context for discourse, either stemming from personal connections, or from the setting of the place/event it takes place in. Online discourse often lacks both, discourse crumbles, entropy ensues. Without consequence for those causing the crumbling. Which makes it fascinating when missing social context is retroactively restored, outing the misbehaving parties, such as the book I once bought by Tinkebell where she matches death threats she received against the sender’s very normal Facebook profiles.

What we're building with MoodleNet is very intentionally focused on communities who come together to collectively curate and build. I think it's set to be a very different environment from what we've (unfortunately) come to expect from social networks such as Twitter and Facebook.

Source: Ton Zylstra

Are tiny conferences and meetups better than big ones?

Over a decade ago, a few Scottish educators got together in a pub for a meetup. This spawned something that is still going to this day: the TeachMeet. I’ve been to a fair few in my time and, particularly in the early days, found them the perfect mix of camaraderie and professional learning.

Does the size of the event matter? I think it probably does. While you can absolutely learn a lot at much larger, carefully curated events such as MoodleMoots, there’s nothing like events of fewer than one hundred people getting together. If it’s fewer than fifty, even better.

I’ve been reminded of this thanks to a post on ‘tiny conferences’ that I found via Hacker News:

I find that I get so much more value and enjoyment from conferences with less than 30 people than I do from most of the 200+ attendee conferences I’ve been to. Don’t get me wrong, there are some excellent, well-run, “real” business conferences with plenty value.

But if I compare and evaluate them based on this criteria: “Did I get what I wanted out of this trip?” … “Will my business benefit because I went?” … “Did I have fun and enjoy my time there?” … “Would I go again?”, then I choose Tiny Confs every time.

The author of the post gives eight pointers for running a successful ‘Tiny Conf’:

  1. Keep it 'tiny'
  2. Make it application and invite-only
  3. Pick a fun location with an activity
  4. 'Sessions' not 'talks'
  5. Plan everything in advance
  6. Manage the money
  7. Keep in touch before, during and after the trip
  8. You do you!

There's some solid advice in there. It actually reminded me of the MountainMoot I went to earlier this year, which ticked all of these boxes. It was a great event, and one that I'll remember for a long time!

At this time of political upheaval and social media burnout, it might be nice even to call this kind of thing a ‘retreat’. I’d certainly be attracted to going to something like that.

Source: Brian Casel


Update: Thanks to Mags Amond, who mentioned CongRegation, which looks excellent!

Small talk and sociability

I admit it, I’m not amazing at what’s often referred to as ‘small talk’. I’m getting better, though, perhaps because I currently live in a row of terraced houses containing people of all ages. Small snippets of conversation about the weather, general health, and relatives are the lubricant of social situations.

The Finns, however, forgo such small talk. It’s not in their culture.

Finnish people often forgo the conversational niceties that are hard-baked into other cultures, and typically don’t see the need to meet foreign colleagues, tourists and friends in the middle.

[…]

“It’s not about the structure or features of the language, but rather the ways in which people use the language to do things,” she explained via email. “For instance, the ‘how are you?’ question that is most often placed in the very beginning of an encounter. In English-speaking countries, it is mostly used just as a greeting and no serious answer is expected to it. On the contrary, the Finnish counterpart (Mitä kuuluu?) can expect a ‘real’ answer after it: quite often the person responding to the question starts to tell how his or her life really is at the moment, what’s new, how they have been doing.”

This article explores whether the Finns need to adapt to the rest of the world, or vice-versa. Interesting stuff!

Source: BBC Travel

Issue [#320]: The power of appreciation

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

The majority (quote)

"Whenever you find yourself on the side of the majority, it is time to pause and reflect."

(Mark Twain)

Co-operation and anti-social punishment in different societies

I find this absolutely fascinating. It turns out that some societies actively ‘punish’ those who engage in collaborative and co-operative ventures:

Social contributions over time (with punishment)

The tragedy of the commons is already well-documented, showing that commonly-owned resources end up suffering if people can free-ride without consequences. The above chart, however, shows that in some cultures, there being a consequence for that free-riding leads to contribution (e.g. Boston, Copenhagen). In others, it makes no difference (e.g. Riyadh, Athens).
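The setup behind charts like this is a standard 'public goods game': everyone gets an endowment, contributions to a common pot are multiplied and shared equally, and players may then pay to punish others. A toy round can be sketched as follows; the endowment, multiplier, and punishment numbers are illustrative assumptions, not the parameters of the actual study:

```python
# Toy public goods game with a punishment stage. The endowment, multiplier,
# and punishment cost/impact are illustrative assumptions, not the
# parameters used by Herrmann, Thöni and Gächter.

def public_goods_round(contributions, multiplier=1.6, endowment=20):
    """Each player keeps what they don't contribute; the common pot is
    multiplied and shared equally, so free-riders out-earn cooperators."""
    share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]


def apply_punishment(payoffs, punishments, cost=1, impact=3):
    """punishments[i][j] = points player i assigns to player j.
    The punisher pays `cost` per point; the target loses `impact` per point."""
    result = list(payoffs)
    for i, row in enumerate(punishments):
        for j, points in enumerate(row):
            result[i] -= cost * points
            result[j] -= impact * points
    return result


# Three full contributors and one free-rider: the free-rider earns most.
payoffs = public_goods_round([20, 20, 20, 0])  # [24.0, 24.0, 24.0, 44.0]

# Each cooperator spends 2 points punishing the free-rider (player 3),
# which shrinks the free-rider's advantage. Anti-social punishment is
# the reverse: free-riders spending points to punish the cooperators.
punished = apply_punishment(payoffs, [
    [0, 0, 0, 2],
    [0, 0, 0, 2],
    [0, 0, 0, 2],
    [0, 0, 0, 0],
])  # [22.0, 22.0, 22.0, 26.0]
```

A single punished round only narrows the free-rider's lead; it's the repeated threat of punishment over many rounds that sustains contributions in some of the cities on the chart but not others.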

Herrmann, Thöni and Gächter speculate that the anti-social punishment may be a form of revenge. You've punished me for free-riding so now I'll punish you just so that you know how it feels! And given that I don't know who the punisher was, I'll punish all the cooperators who were likely to administer the original punishment in the first place.

I'm less interested in the graphs and the 'hard' science than the anecdotal aspects of this post. The author is from Slovakia, and comments:

To get back to Eastern Europe, we've used to live under communist regime where all the common causes were appropriated by the state. Any gains from a contribution to a common cause would silently disappear somewhere in the dark corners of the bureaucracy.

Quite the opposite: People felt justified to take stuff from the commons. We even had a saying: “If you don’t steal [from the common property] you are stealing from your family.”

At the same time, stealing from the state was, legally, a crime apart and it was ranked in severity somewhere in the vicinity of murder. You could get ten years in jail if they’ve caught you.

Unsurprisingly, in such an environment, reporting to authorities (i.e. “pro-social punishment”) was regarded as highly unjust — remember the coffee cup example! — and anti-social and there was a strict taboo against it. Ratting often resulted in social ostracism (i.e. “anti-social punishment”). We can still witness that state of affairs in the highly offensive words used to refer to the informers: “udavač”, “donášač”, “práskač”, “špicel”, “fízel” (roughly: “nark”, “rat”, “snoop”, “stool pigeon”).

A perfect example of how the state can cause co-operation to thrive or dwindle based on government policy.

Source: LessWrong

How do people learn?

I was looking forward to digging into a new book from the US National Academies Press, which is freely downloadable in return for a (fake?) email address:

There are many reasons to be curious about the way people learn, and the past several decades have seen an explosion of research that has important implications for individual learning, schooling, workforce training, and policy.

In 2000, How People Learn: Brain, Mind, Experience, and School: Expanded Edition was published and its influence has been wide and deep. The report summarized insights on the nature of learning in school-aged children; described principles for the design of effective learning environments; and provided examples of how that could be implemented in the classroom.

Since then, researchers have continued to investigate the nature of learning and have generated new findings related to the neurological processes involved in learning, individual and cultural variability related to learning, and educational technologies. In addition to expanding scientific understanding of the mechanisms of learning and how the brain adapts throughout the lifespan, there have been important discoveries about influences on learning, particularly sociocultural factors and the structure of learning environments.

How People Learn II: Learners, Contexts, and Cultures provides a much-needed update incorporating insights gained from this research over the past decade. The book expands on the foundation laid out in the 2000 report and takes an in-depth look at the constellation of influences that affect individual learning. How People Learn II will become an indispensable resource to understand learning throughout the lifespan for educators of students and adults.

Thankfully, Stephen Downes has created a slide-based overview of the key points for easier consumption!

How People Learn from Stephen Downes

It would have been great if he’d used different images rather than the same one on every slide, but it’s still helpful.

Source: National Academies / OLDaily

Reappropriating the artifacts of late-stage capitalism

During our inter-railing adventure this summer, we visited Zurich in Switzerland. In one of the parks there, we came across a dockless scooter, which we promptly unlocked and had a great time zooming around.

As you’d expect, the greatest density of dockless bikes and scooters — devices that don’t have to be picked up or returned in any specific place — is in San Francisco. It seems that, in their attempts to flood the city and gain some kind of competitive advantage, VC-backed dockless bike and scooter startups are having an unintended effect. They’re helping homeless people move around the city more easily:

Hoarding and vandalism aren't the only problems for electric scooter companies. There's also theft. While the vehicles have GPS tracking, once the battery fully dies they go off the app's map.

“Every homeless person has like three scooters now,” [Michael Ghadieh, who owns electric bicycle shop, SF Wheels] said. “They take the brains out, the logos off and they literally hotwire it.”

I’ve seen scooters stashed at tent cities around San Francisco. Photos of people extracting the batteries have been posted on Twitter and Reddit. Rumor has it the batteries have a resale price of about $50 on the street, but there doesn’t appear to be a huge market for them on eBay or Craigslist, according to my quick survey.

Source: CNET (via BoingBoing)

Venture beyond the expected (quote)

“The easiest route to take is to glide in the direction of wherever fate pushes. But living at the mercy of circumstance makes you a passive participant in your own story. Without a fight against fate (aka the status quo), you’ll never venture beyond the expected.”

(Scott Belsky)

Myths about children and digital technologies

Prof. Sonia Livingstone has written a link-filled post relating to a panel she’s on at the Digital Families 2018 conference. In it, she talks about six myths around children in the digital age:

  1. Children are ‘digital natives’ and know it all.
  2. Parents are ‘digital immigrants’ and don’t know anything.
  3. Time with media is time wasted compared with ‘real’ conversation or playing outside.
  4. Parents’ role is to monitor, restrict and ban because digital risks greatly outweigh digital opportunities.
  5. Children don’t care about their privacy online.
  6. Media literacy is THE answer to the problems of the digital age.

Good stuff, and the post and associated links are well worth checking out.

Source: Parenting for a Digital Future

GAFA: time to 'ignore and withdraw'?

Last week, Motherboard reported that an unannounced update by Apple meant that third-party repairs of products such as the MacBook Pro would be impossible:

Apple has introduced software locks that will effectively prevent independent and third-party repair on 2018 MacBook Pro computers, according to internal Apple documents obtained by Motherboard. The new system will render the computer “inoperative” unless a proprietary Apple “system configuration” software is run after parts of the system are replaced.
As they have updated the story to state, iFixit did some testing and found that this 'kill switch' hasn't been activated - yet.

To me, it further reinforced why I love and support, in very practical ways, Open Source Software (OSS). I use OSS, and I’m working on it in my day-to-day professional life. Sometimes, however, we don’t do a good enough job of explaining why it’s important. For me, the Apple story is a terrifying example of other people deciding when you should upgrade and/or stop using something.

Another example from this week: Google have announced that they’re shutting down their social network, Google+. It’s been a long-time coming, but it was only last month that, due to the demise of Path, my family was experimenting with Google+ as somewhere to which we could have jumped ship.

Both Apple’s products and Google+ are proprietary. You can’t see the source code. You can’t inspect it for bugs or security leaks. And the latter is actually why Google decided to close down their service. That, and the fact it only had 500,000 users, most of whom were spending less than five seconds per visit.

So, what can we do in the face of huge companies such as Google, Amazon, Facebook, and Apple (GAFA)? After all, they’ve got, for all intents and purposes, almost unlimited money and power. Well, we can and should vote for politicians to apply regulatory pressure on them. But, more practically, we can ignore and withdraw from these companies. They’re not trillion-dollar companies just because they’re offering polished products. They’re rich because they’re finding ever more elaborate and sneaky ways to achieve vendor lock-in.

This affects the technology purchases that we make, but it also has an effect on the social networks we use. As is becoming clear, the value that huge multi-national companies such as Google and Facebook gain from offering services for ‘free’ vastly outstrips the amount of money they spend on providing them. With Google+ shutting down, and Facebook’s acquisition of Instagram and WhatsApp, the number of options for social networking seems to be getting ever-smaller. Sadly, our current antitrust and monopoly regulations haven’t been updated to deal with this.

So what can we do? I’ve been using Mastodon in earnest since May 2017. It’s a decentralised social network, meaning that anyone can set up their own ‘instance’ and communicate with everyone else running the same OSS. Most of the time, people join established instances, whether because the instance is popular, or it fits with their particular interests. Recently, however, I’ve noticed people setting up an instance just for themselves.

At first, I thought this was a quirky and slightly eccentric thing to do. It seemed like the kind of thing that tech-literate people do just because they can. But then, I read a post by Laura Kalbag where she explained her reasoning:

Everything I post is under my control on my server. I can guarantee that my Mastodon instance won’t start profiling me, or posting ads, or inviting Nazis to tea, because I am the boss of my instance. I have access to all my content for all time, and only my web host or Internet Service Provider can block my access (as with any self-hosted site.) And all blocking and filtering rules are under my control—you can block and filter what you want as an individual on another person’s instance, but you have no say in who/what they block and filter for the whole instance.

You can also make custom emoji for your own Mastodon instance that every other instance can see and/or share.

Ton Zylstra is another person who has blogged about running his own instance. It would seem that this is a simple thing to do using a service such as masto.host.

Of course, many people reading this will think so what? And, perhaps, that seems like a whole lot of hassle. Maybe so. I hope it’s not hyperbolic to say so, but for me, I see all of this as being equivalent to climate change. It’s something that we all know we need to do something about but, for most of us, it’s just too much hassle to think about what could happen in future.

I, for one, hope that we’re not looking back from (a very hot) year 2050 regretting the choices we made in 2018.

Graceful conduct (quote)

“Graceful conduct is the chief ornament of life; it gets you out of any tight situation.”

(Baltasar Gracián)

Issue [#319]: Operation Twilight

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Example and opinion (quote)

“The world is changed by your example, not by your opinion.”

(Paolo Coelho)

Insidious Instagram influencers?

There seems to be a lot of pushback at the moment against the kind of lifestyle that’s a direct result of the Silicon Valley mindset. People are rejecting everything from the Instagram ‘influencer’ approach to life to the ‘techbro’-style crazy working hours.

This week saw Basecamp, a company that prides itself on the work/life balance of its employees and on rejecting venture capital, publish another book. You can guess at what it focuses on from its title, It doesn’t have to be crazy at work. I’ve enjoyed and have recommended their previous books (as ‘37 Signals’), and am looking forward to reading this latest one.

Alongside that book, I’ve seen three articles that, to me at least, are all related to the same underlying issues. The first comes from Simone Stolzoff who writes in Quartz at Work that we’re no longer quite sure what we’re working for:

Before I became a journalist, I worked in an office with hot breakfast in the mornings and yoga in the evenings. I was #blessed. But I would reflect on certain weeks—after a string of days where I was lured in before 8am and stayed until well after sunset—like a driver on the highway who can’t remember the last five miles of road. My life had become my work. And my work had become a series of rinse-and-repeat days that started to feel indistinguishable from one another.

Part of this lack of work/life balance comes from our inability these days to simply have hobbies, or interests, or do anything just for the sake of it. As Tim Wu points out in The New York Times, it's all linked to some kind of existential issue around identity:
If you’re a jogger, it is no longer enough to cruise around the block; you’re training for the next marathon. If you’re a painter, you are no longer passing a pleasant afternoon, just you, your watercolors and your water lilies; you are trying to land a gallery show or at least garner a respectable social media following. When your identity is linked to your hobby — you’re a yogi, a surfer, a rock climber — you’d better be good at it, or else who are you?
To me, this is inextricably linked to George Monbiot's recent piece in The Guardian about the problem of actors being interviewed about the world's issues disproportionately more often than anybody else. As a result, we're rewarding those people who look like they know what they're talking about with our collective attention, rather than those who actually do. Monbiot concludes:
The task of all citizens is to understand what we are seeing. The world as portrayed is not the world as it is. The personification of complex issues confuses and misdirects us, ensuring that we struggle to comprehend and respond to our predicaments. This, it seems, is often the point.
There's always been a difference between appearance and reality in public life. However, previously, at least they seem to have been two faces of the same coin. These days, our working lives as well as our public lives seem to be

Sources: Basecamp / Quartz at Work / The New York Times / The Guardian

 

The end of 'meritocracy' at Mozilla

A couple of years ago, I wrote a post explaining how appeals to ‘meritocracy’ are problematic, particularly in education. The world is not a neutral place and meritocracy can actually entrench privilege.

I’m glad to see, therefore, that Mozilla have decided to stop using the term:

“Meritocracy” was widely adopted as a best practice among open source projects in the founding days of the movement: it appeared to speak to collaboration amongst peers and across organizational boundaries. 20 years later,  we understand that this concept was practiced in a world characterized by both hidden bias and outright abuse. The notion of “meritocracy” can often obscure bias and can help perpetuate a dominant culture. Meritocracy does not consider the reality that tech does not operate on a level playing field.
Source: Mozilla Stands for Inclusion

Is Google becoming more like Facebook?

I’m composing this post on ChromeOS, which is a little bit hypocritical, but yesterday I was shocked to discover how much data I was ‘accidentally’ sharing with Google. Check it out for yourself by going to your Google account’s activity controls page.

This article talks about how Google have become less trustworthy of late:

[Google] announced a forthcoming update last Wednesday: Chrome’s auto-sign-in feature will still be the default behavior of Chrome. But you’ll be able to turn it off through an optional switch buried in Chrome’s settings.

This pattern of behavior by tech companies is so routine that we take it for granted. Let’s call it “pulling a Facebook” in honor of the many times that Facebook has “accidentally” relaxed the privacy settings for user profile data, and then—following a bout of bad press coverage—apologized and quietly reversed course. A key feature of these episodes is that management rarely takes the blame: It’s usually laid at the feet of some anonymous engineer moving fast and breaking things. Maybe it’s just a coincidence that these changes consistently err in the direction of increasing “user engagement” and never make your experience more private.

What’s new here, and is a very recent development indeed, is that we’re finally starting to see that this approach has costs. For example, it now seems like Facebook executives spend an awful lot of time answering questions in front of Congress. In 2017, when Facebook announced it had handed more than 80 million user profiles to the sketchy election strategy firm Cambridge Analytica, Facebook received surprisingly little sympathy and a notable stock drop. Losing the trust of your users, we’re learning, does not immediately make them flee your business. But it does matter. It’s just that the consequences are cumulative, like spending too much time in the sun.

I'm certainly questioning my tech choices. And I've (re-)locked down my Google account.

Source: Slate

Bullshit receptivity scale

I love academia. Apparently researchers in psychology are using ‘hyperactive agency detection’ and a ‘Bullshit Receptivity Scale’ in their work to describe traits found in human subjects. It’s particularly useful when researching the tendency of people to believe in conspiracy theories, apparently:

Participants’ receptivity to superficially profound statements was measured using the Bullshit Receptivity Scale (Pennycook et al., 2015). This measure consists of nine seemingly impressive statements that follow rules of syntax and contain fancy words, but do not have any intentional meaning (e.g., “Wholeness quiets infinite phenomena”; “Imagination is inside exponential space time events”). Participants rated each of the items’ profoundness on a scale from 1 (Not at all profound) to 5 (Very profound). They were given the following definition of profound for reference: “of deep meaning; of great and broadly inclusive significance.”

[…]

To measure participants’ tendency to attribute intent to events, we asked them to interpret the actions portrayed by animated shapes (Abell, Happé, & Frith, 2000), a series of videos lasting from thirty seconds to one minute depicting two triangles whose actions range from random (e.g., bumping around the screen following a geometric pattern) to resembling complex social interactions (e.g., one shape “bullying” the other). These animations were originally designed to detect deficits in the development of theory of mind.

I’ve no idea about the validity of the conclusions in this particular study (especially as it doesn’t seem to be peer-reviewed yet) but I always like discovering terms that provide a convenient shorthand.

For example, I can imagine exclaiming that someone is “off the Bullshit Receptivity Scale!” or has “hyperactive agency detection”. Nice.
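The scoring procedure the researchers describe is simple enough to sketch. This is my own illustration of it, not their actual code: each participant rates the nine pseudo-profound statements from 1 to 5, and their receptivity score is just the mean rating.

```python
# Sketch (not the researchers' code) of Bullshit Receptivity Scale scoring:
# nine ratings on a 1-5 profundity scale, averaged into a single score.

def bsr_score(ratings):
    """Mean profundity rating across the scale's items (each 1-5)."""
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("each rating must be on the 1-5 scale")
    return sum(ratings) / len(ratings)

# One hypothetical participant's ratings of the nine statements:
ratings = [3, 4, 2, 5, 3, 4, 3, 2, 4]
print(round(bsr_score(ratings), 2))  # 3.33
```

A higher mean means the participant found more ‘deep meaning’ in syntactically valid nonsense.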

Source: SSRN (via Pharyngula)

Listen well (quote)

“To listen well, is as powerful a means of influence as to talk well, and is as essential to all true conversation.”

(Chinese Proverb)

Seven coaching questions

Eylan Ezekiel shared this article in the Slack channel we hang out in most days. It’s a useful set of questions for when you’re in a coaching situation — which could be in sports, at work, when teaching, or even parenting:

  1. “What’s on your mind?”
  2. “And what else?”
  3. “What’s the real challenge here for you?”
  4. “What do you want?”
  5. “How can I help?”
  6. “If you say yes to this, what must you say no to?”
  7. “What was most useful or most valuable here for you?”
Source: Huffington Post

Issue [#318]: Blisters a-go-go

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Why desk jobs are exhausting

Sitting, apparently, is the new smoking. That’s one of the reasons I bought a standing desk, meaning that most days, I’m working while upright. Knowledge work, however, whether sitting or standing, is tiring.

Why is that? This article reports on a study that may have an answer.

Here’s the topline result: There was no correlation between the amount of physical work the nurses did and their feelings of fatigue. “In some people, physical activity is fatiguing,” Derek Johnston, the Aberdeen University psychologist who led the study, says. “But in other people, it is energizing.” The study also found that the nurses’ subjective sense of how demanding their job was of them was not correlated with fatigue either.

Instead, they found this small correlation: The nurses who were least likely to feel fatigued from their work also felt the most in control of their work, and the most rewarded for it. These feelings may have boosted their motivation, which may have boosted their perception of having energy.
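If you’re wondering what ‘no correlation’ means concretely, here’s a quick sketch using entirely made-up numbers (the study’s data isn’t reproduced here): a Pearson coefficient near zero says that knowing how much physical work a nurse did tells you nothing about how fatigued she felt.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented illustrative numbers: physical workload per shift vs.
# self-reported fatigue (1-5). A flat result like this is the *shape*
# of the finding, not the study's actual data.
physical_work = [1, 2, 3, 4]
fatigue = [1, 2, 2, 1]
print(pearson_r(physical_work, fatigue))  # 0.0 -- no linear relationship
```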

Source: Vox

Microshifts are more effective than epiphanies

Interesting article about how to change your long-term behaviours. I’ve managed to stop biting my nails (I know, I know), become pescetarian, and largely give up drinking coffee through similar advice:

Any habit you want to build takes practice, and the recognition that you’re not going to accomplish it immediately. Whether it’s saving more money, or running a few miles, or learning about classical music, you’re not going to experience a dramatic shift and suddenly have $10,000 socked away, or be able to run a marathon, or know Mozart’s entire catalogue. But if you’re dedicated and commit yourself to something over a long period, microshifts will get you where you want to go.
Source: Brianna Wiest (via Lifehacker)

An incorrect approach to teaching History

My thanks to Amy Burvall for bringing to my attention this article about how we’re teaching History incorrectly. Its focus is on how ‘fact-checking’ is so different with the internet than it was beforehand. There are a lot of similarities between what the interviewee, Sam Wineburg, has to say and what Mike Caulfield has been working on with Web Literacy for Student Fact-Checkers:

Fact-checkers know that in a digital medium, the web is a web. It’s not just a metaphor. You understand a particular node by its relationship in a web. So the smartest thing to do is to consult the web to understand any particular node. That is very different from reading Thucydides, where you look at internal criticism and consistency because there really isn’t a documentary record beyond Thucydides.

Source: Slate

Cory Doctorow on Big Tech, monopolies, and decentralisation

I’m not one to watch a 30-minute video, as usually it’s faster and more interesting to read the transcription. I’ll always make an exception, however, for Cory Doctorow who not only speaks almost as fast as I can read, but is so enthusiastic and passionate about his work that it’s a lot more satisfying to see him speak.

You have to watch his keynote at the Decentralized Web Summit last month. It’s not only a history lesson and a warning, but he puts in ways that really make you see what the problem is. Inspiring stuff.

Source: Boing Boing

Airbnb wants to give out shares to its superhosts

Note: I’m testing shorter, more to-the-point updates, alongside the regular ones. Let me know what you think in the comments!


Airbnb sent a letter to the SEC asking for the regulator to permit offering equity to hosts. Airbnb primarily supported changes to Securities Act Rule 701 that would allow offering shares to gig economy workers, not just investors and staff. CEO Brian Chesky characterized it as vital to rewarding the company's supporters.

[…]

This isn’t the first time a gig-oriented online service has petitioned the SEC. Uber met with the Commission more than once to ask about the possibility. Airbnb is pushing for a direct policy change, however, where Uber was more interested in how it could offer shares under the existing framework.

Source: Engadget

Experimenting with turning on comments for a week

Hello Thought Shrapnel readers! Some of you have asked over the last few months why the ability to comment on posts is switched off here.

Well, that’s mainly because I noticed a general downwards trend in the quality of online comments. For example, people would share their opinions on my blog posts without reading more than the title, or just link to their own stuff. And then there’s the perennial problem of spam.

This week I’m going to run an experiment and leave comments open. Everything I post from today to the end of the week you’ll be able to comment on directly.

I like the approach that Dan Meyer takes on his blog with adding ‘featured comments’ to his posts after the fact. I may try that.

Let’s see how it goes…


Image by clement127 used under a Creative Commons license

Issue [#317]: The Path to better social networks

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Why badge endorsement is a game-changer

Since starting work with Moodle, I’ve been advocating for upgrading its Open Badges implementation to v2.0. It’s on the horizon, thankfully. The reason I’m particularly interested in this is endorsement, the value of which is explained in a post by Don Presant:

What’s so exciting about Endorsement, you may ask. Well, for one thing, it promises to resolve recurring questions about the “credibility of badges” by providing third party validation that can be formal (like accreditation) or informal (“fits our purpose”). Endorsement can also strengthen collaboration, increase portability and encourage the development of meaningful badge ecosystems.
I've known Don for a number of years and have been consistently impressed by his combination of idealism and pragmatism. He provides a version of Open Badge Factory in Canada called 'CanCred' and, under these auspices, is working on a project around a Humanitarian Passport.
Endorsement of organisations is now being embedded into the DNA of HPass, the international humanitarian skills recognition network now in piloting, scheduled for public launch in early 2019. Organisations who can demonstrate audited compliance with the HPass Standards for Learning or Assessment Providers will become “HPass Approved” on the system, a form of accreditation that will be signposted with Endorsement metadata baked into their badges and a distinctive visual quality mark they can display on their badge images. This is an example of a formal “accreditation-like” endorsement, but HPass badges can also be endorsed informally by peer organisations.
The ultimate aim of alternative credentialing such as Open Badges is recognition, and I think that the ability to endorse badges is a big step forward towards that.
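To make this concrete, here’s a rough sketch of what endorsement metadata looks like, based on my reading of the Open Badges 2.0 specification. The URLs and organisation names below are invented placeholders, not real HPass identifiers:

```python
# Illustrative Open Badges 2.0 endorsement metadata (my sketch of the spec;
# all identifiers below are made up). A third party issues an Endorsement
# whose `claim` points at the badge class (or issuer) being vouched for.
endorsement = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Endorsement",
    "id": "https://example.org/endorsements/1",
    "issuer": {
        "type": "Profile",
        "id": "https://example.org/accreditor",
        "name": "Example Accrediting Body",
    },
    "claim": {
        # The badge class being endorsed, plus the endorser's comment
        "id": "https://example.org/badges/first-aid",
        "endorsementComment": "Meets our standards for assessment providers.",
    },
    "issuedOn": "2018-10-01T00:00:00+00:00",
}
```

The point is that the vouching is machine-readable and travels with the badge, rather than living on some accreditor’s website.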

Source: Open Badge Factory

Internalising the logic of social media

A few days ago, Twitter posted a photo of an early sketch that founder Jack Dorsey made for the initial user interface. It included settings to inform a user’s followers that they might not respond immediately because they were in the park or busy reading.

A day later, an article in The New Yorker about social media used a stark caption for its header image:

Social-media platforms know what you’re seeing, and they know how you acted in the immediate aftermath of seeing it, and they can decide what you will see next.
There's no doubt in my mind that we're like slow-boiled frogs when it comes to creeping dystopia. It's not happening through the totalitarian lens of the 20th century, but instead in a much more problematic way.
One of the more insidious aspects of [social media's business] model is the extent to which we, as social-media users, replicate its logic at the level of our own activity: we perform market analysis of our own utterances, calculating the reaction a particular post will generate and adjusting our output accordingly. Negative emotions like outrage and contempt and anxiety tend to drive significantly more engagement than positive ones.
No wonder Twitter's such an angry place these days.

The article quotes James Bridle’s book New Dark Age, a book which is sitting waiting for me on my shelf when I get back home from this work trip.

We find ourselves today connected to vast repositories of knowledge and yet we have not learned to think. In fact, the opposite is true: that which was intended to enlighten the world in practice darkens it. The abundance of information and the plurality of worldviews now accessible to us through the internet are not producing a coherent consensus reality, but one riven by fundamentalist insistence on simplistic narratives, conspiracy theories, and post-factual politics. It is on this contradiction that the idea of a new dark age turns: an age in which the value we have placed upon knowledge is destroyed by the abundance of that profitable commodity, and in which we look about ourselves in search of new ways to understand the world.
This resonates with a quotation I posted to Thought Shrapnel this week from Jon Ronson's So You've Been Publicly Shamed about how we're actually creating a more conservative environment, despite thinking we're all 'non-conformist'.
To be alive and online in our time is to feel at once incensed and stultified by the onrush of information, helpless against the rising tide of bad news and worse opinions. Nobody understands anything: not the global economy governed by the unknowable whims of algorithms, not our increasingly volatile and fragile political systems, not the implications of the impending climate catastrophe that forms the backdrop of it all. We have created a world that defies our capacity to understand it—though not, of course, the capacity of a small number of people to profit from it. Deleting your social-media accounts might be a means of making it more bearable, and even of maintaining your sanity. But one way or another, the world being what it is, we are going to have to learn to live in it.
Last week, at the ALT conference, those in the audience were asked by the speaker to 'stand up' if they felt imposter syndrome. I didn't get to my feet, but it wasn't an act of arrogance or hubris. I may have no idea what I'm doing, but I'm pretty sure no-one else does either.

Source: The New Yorker

The Digital Knowledge Loop

I’ve featured the work of Albert Wenger a few times before on Thought Shrapnel. He maintains a blog called Continuations and is writing a book called World After Capital.

In this post, he expands on a point he makes in his book around the ‘Digital Knowledge Loop’ which, Wenger says, has three components:

  1. Economic freedom. We must let everyone meet their basic needs without being forced into the Job Loop. With economic freedom, we can embrace automation and enable everyone to participate in and benefit from the Digital Knowledge Loop.
  2. Informational freedom. We must remove barriers from the Digital Knowledge Loop that artificially limit learning from existing knowledge, creating new knowledge based on what we learn and sharing this new knowledge. At the same time must build systems that support the operation of critical inquiry in the Digital Knowledge Loop.
  3. Psychological freedom. We must free ourselves from scarcity thinking and its associated fears and other emotional reactions that impede our participation in the Digital Knowledge Loop. Much of the peril of the Digital Knowledge Loop arises directly from a lack of psychological freedom.
Wenger is a venture capitalist, albeit a seemingly-enlightened one. Interestingly, he's approaching the post-scarcity world through the lens of knowledge, economics, and society. As educators, I think we need to be thinking about similar things.

In fact, this reminds me of some work Martin Weller at the Open University has done around a pedagogy of abundance. After reviewing the effect of the ‘abundance’ model in the digital marketplace, he looks at what that means for education. He concludes:

The issue for educators is twofold I would suggest: firstly how can they best take advantage of abundance in their own teaching practice, and secondly how do we best equip learners to make use of it? It is this second challenge that is perhaps the most significant. There is often consideration given to  transferable or key skills in education (eg Dearing 1997), but these have not been revisited to take into account the significant change that abundant and free content offers to learners... Coping with abundance then is a key issue for higher education, and one which as yet, it has not made explicit steps to meet, but as with many industries, adopting a  response which attempts to reinstate scarcity would seem to be a doomed enterprise.
Yesterday, during a break in our MoodleNet workshop with Outlandish, we were talking about the The Up Series of documentaries that showed just how much of a conveyer belt there is for children born into British society. I think part of the problem around that is we're locked into outdated models, as Wenger and Weller point out in their respective work.

My children, for example, with a few minor updates, are experiencing the very same state education I received a quarter of a century ago. The world has moved on, yet the mindset of scarcity remains. They’re not going to have a job for life. They don’t need to selfishly hold onto their ‘intellectual property’. And they certainly don’t need to learn how to sit still within a behaviourist classroom.

Source: Continuations

Kindness and courage (quote)

“Life is mostly froth and bubble, Two things stand like stone. Kindness in another’s trouble, Courage in your own.”

(Adam Lindsay Gordon)

The rise and rise of e-sports

I wouldn’t even have bothered clicking on this article if it weren’t for one simple fact: my son can’t get enough of this guy’s YouTube channel.

If you haven't heard of Ninja, ask the nearest 12-year-old. He shot to fame in March after he and Drake played Fortnite, the video game phenomenon in which 100 players are dropped onto an island and battle to be the last one standing while building forts that are used to both attack and hide from opponents. At its peak, Ninja and Drake's game, which also featured rapper Travis Scott and Pittsburgh Steelers receiver JuJu Smith-Schuster, pulled in 630,000 concurrent viewers on Twitch, Amazon's livestreaming platform, shattering the previous record of 388,000. Since then, Ninja has achieved what no other gamer has before: mainstream fame. With 11 million Twitch followers and climbing, he commands an audience few can dream of. In April, he logged the most social media interactions in the entire sports world, beating out the likes of Cristiano Ronaldo, Shaquille O'Neal and Neymar.
This article in ESPN is testament to the work that Ninja (a.k.a. Tyler Blevins) has done in crafting a brand and putting in the hours for over a decade. It sounds gruelling:
Tyler can't join us until he wraps up his six-hour stream. In the basement, past a well-stocked bar, a pool table and a dartboard, next to a foosball table, he sits on this sunny August day in a T-shirt and plaid pajama pants at the most famous space in their house, his gaming setup. It doesn't look like much -- a couple of screens, a fridge full of Red Bull, a mess of wires -- but from this modest corner he makes millions by captivating millions.

[…]

In college, Jess [his wife] started streaming to better understand why Tyler would go hours without replying to her texts. A day in, she realized how consuming it was. “It’s physically exhausting but also mentally because you’re sitting there constantly interacting,” Tyler says. “I’m engaging a lot more senses than if I were just gaming by myself. We’re not sitting there doing nothing. I don’t think anyone gets that."

The reason for sharing this here is because I’m going to use this as an example of deliberate practice.

How does he stay so good? Pro tip: Don't just play, practice. Ninja competes in about 50 games a day, and he analyzes each and every one. He never gets tired of it, and every loss hits him hard. Hypercompetitive, he makes sure he walks away with at least one win each day. (He averages about 15 and once got 29 in a single day.)

“When I die, I get so upset,” he says. “You can play every single day, you’re not practicing. You die, and oh well, you go onto the next game. When you’re practicing, you’re taking every single match seriously, so you don’t have an excuse when you die. You’re like, ‘I should have rotated here, I should have pushed there, I should have backed off.’ A lot of people don’t do that."

The article is worth a read, for several reasons. It shows why e-sports are going to be even bigger than regular sports for my children’s generation. It demonstrates that to get to the top in anything you have to put in the time and effort. And, perhaps, above all, it shows that, just as I’ve found, growing up spending time in front of screens can be pretty lucrative.

Source: ESPN

Online conformity (quote)

“We see ourselves as non-conformist, but I think all of this [online shaming] is creating a more conformist, conservative age… We are defining the boundaries of normality by tearing apart the people outside of it.”

(Jon Ronson, So You’ve Been Publicly Shamed)

A portal into a decentralised universe

You may recognise Cloudflare’s name from their provision of ‘snapshots’ of websites that are currently experiencing problems. They do this through what’s called ‘distributed DNS’, which mitigates some of the issues around centralisation of the web. I use their 1.1.1.1 DNS service via Blokada on my smartphone to improve speed and privacy.

The ultimate goal, as we seek to move away from proprietary silos run by big tech companies (what I tend to call ‘SaaS with shareholders’), is to re-decentralise the web. I’ve already experimented with this, after speaking at a conference in Barcelona on the subject last October, and experimenting with my own ‘uncensorable’ blog using ZeroNet.

Up to now, however, it hasn’t been easy to jump from the regular ol’ web (the one you’re used to browsing using https) to the distributed web (DWeb). You need a gateway to use a regular web browser with the DWeb. I set up one of these last year and quickly had to take it down, as it was expensive to run!

I’m delighted, therefore, to see that Cloudflare have launched an IPFS gateway. IPFS stands for ‘InterPlanetary File System’ and is a “peer-to-peer hypermedia protocol to make the web faster, safer, and more open”. It does lots of cool stuff around redundancy and resilience that I won’t go into here. Suffice to say, it’s the future.

Today we’re excited to introduce Cloudflare’s IPFS Gateway, an easy way to access content from the InterPlanetary File System (IPFS) that doesn’t require installing and running any special software on your computer. We hope that our gateway, hosted at cloudflare-ipfs.com, will serve as the platform for many new highly-reliable and security-enhanced web applications. The IPFS Gateway is the first product to be released as part of our Distributed Web Gateway project, which will eventually encompass all of our efforts to support new distributed web technologies.
As I mentioned above, one of the issues with having a decentralised blog or website is that people can't access it on the regular web. This changes that, and hopefully in a way where we don't just end up with a new type of centralisation:
IPFS gateways are third-party nodes that fetch content from the IPFS network and serve it to you over HTTPS. To use a gateway, you don’t need to download any software or type any code. You simply open up a browser and type in the gateway’s name and the hash of the content you’re looking for, and the gateway will serve the content in your browser.
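The quoted steps are simple enough to sketch in a few lines. This is purely an illustration, nothing Cloudflare-specific beyond the hostname from their announcement; the CID used is the well-known IPFS example hash:

```python
# Build the HTTPS URL an IPFS gateway uses to serve content-addressed data.
# The default gateway host comes from Cloudflare's announcement; any public
# IPFS gateway works the same way.
def ipfs_gateway_url(cid: str, gateway: str = "cloudflare-ipfs.com") -> str:
    return f"https://{gateway}/ipfs/{cid}"

# The CID below is the well-known IPFS example hash, used purely for illustration.
url = ipfs_gateway_url("QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG")
print(url)  # https://cloudflare-ipfs.com/ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG
```

Paste that URL into any browser, or fetch it with curl, and the gateway retrieves the content from the IPFS network and serves it over plain HTTPS, no special software required.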
We're thinking about how IPFS could be used with the MoodleNet project I'm leading. If we're building a decentralised resource-centric social network it makes sense for those resources to be accessed in a decentralised way! Developments such as this make that much more likely to happen sometime soon.

Source: Cloudflare blog

(Related: The Guardian on the DWeb, and Fred Wilson’s take on Cloudflare’s IPFS gateway)

(Educational) consulting for the uninitiated

Noah Geisel, who I know from the world of Open Badges, has written a great post on how to be an educational consultant. I’ve got some advice of my own to add to his, but I’ll let him set the scene:

I get several messages each month from people — usually teachers — reaching out for an informational interview to learn about what options exist to be an education consultant. I’ve had the conversation enough times now that I’m sharing out this quick primer of what normally gets discussed. Maybe it’ll save you the cup of coffee you were going to buy me or help you come prepared to our coffee with novel questions that really make me think.

In my experience, people in employment who have never been their own boss are always intrigued by the prospect of becoming a freelancer or consultant. This is particularly the case with jobs like teaching that are endless time and energy pits.

Like Noah, the first thing I’d do is try to get underneath the desire to do something different. Why is that? I think he does this brilliantly by asking whether potential consultants are running towards something, or running away from it:

Which way are you running? This is the most important question. Are you running to a new opportunity or are you running away from your current situation? The people I know who are successful and happy doing this work definitely ran to it. The work is just too hard to be anything other than what you want (or even NEED) to be doing. I can’t speak for others but my own experience is that this path is a calling, not an escape pod.
I can only speak about my own experience, but once the Open Badges work went outside of Mozilla, and I'd pretty much done all I could with the Web Literacy work, it was time for me to move into consultancy. It was the logical step, both because I was ready for it, but also because people were asking if I was available.

If no-one’s asking whether you can help them out with something you already specialise in, then it’s going to be a long, hard struggle to be seen as an expert, get gigs, and pay your mortgage. However, if you do decide to make the leap, I like the way Noah demarcates the types of consultancy you can do:

  1. Join forces with a known legacy brand
  2. Apply for a posted position
  3. Independent Consultant: hometown hero variety
  4. Independent Consultant: free agent variety
The first two of these are employment by a different name. The third might go well for a few months, but you're likely to quickly run out of clients, unless you lock them into a multi-year contract. Realistically, you need to go for the fourth option.

If you’ve been used to a job where you do lots of different things, such as teaching, the temptation is going to be to offer lots of different services. The trouble with that, of course, is that people find it difficult to know what you’re selling.

You are wise to avoid attempting to be all things to all people. Focus on a strength that gives you a competitive advantage and go hard; if you fail, you’ll want to know that wasn’t because you didn’t put enough into it.

One of the best things I've ever done is to set up a co-operative with friends and former colleagues. We have an associated Slack channel for both member discussions (private) and discourse with trusted colleagues and acquaintances. Meeting regularly, and doing work with these guys not only gives us flexibility, but access to a wider range of expertise than I could provide on my own.

As Noah says, it’s great to earn a bit of money on the side, but that’s very different to deciding that you’re going to rely on products you can sell and services you can provide for your income. I did it successfully for three years, before deciding to take my current four-day-per-week position with Moodle, and work with the co-op on the side.

Finally, one thing that might help is to see your life as having ‘seasons’. I think too many people see their professional life as some kind of ladder they need to climb. It’s nothing of the sort. It’s always nice to be well-paid (and I’ve never earned more than when I was consulting full-time) but there are other things that are valuable in life: colleagues, security, and benefits such as a pension and healthcare, to name but a few.

Source: Verses

(Related: a post I wrote of my experiences after two years of full-time consultancy)

Blogging and content marketing (quote)

Content marketing and blogging may be diametrically opposed to each other, but one isn’t bad and the other good. There’s just what’s right for how you want to operate and what you need your content to do for you or your audience. It’s just something to consider – does the intent of how you want to exist as a content creator online actually line up with how you’re operating? If it doesn’t, perhaps it’s time to change.

(Paul Jarvis)

Issue [#316]: Is that better? 🙄 🙄 🙄

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Charity is no substitute for justice

The always-brilliant Audrey Watters eviscerates the latest project from a white, male billionaire to 'fix education'. Citing Amazon CEO Jeff Bezos' plan to open a series of "Montessori-inspired preschools in underserved communities" where "the child will be the customer", Audrey comments:

The assurance that “the child will be the customer” underscores the belief – shared by many in and out of education reform and education technology – that education is simply a transaction: an individual’s decision-making in a “marketplace of ideas.” (There is no community, no public responsibility, no larger civic impulse for early childhood education here. It’s all about private schools offering private, individual benefits.)

As I've said on many occasions, everyone wakes up with cool ideas to change the world. The difference is that you or I would have to run an idea through many, many filters to get the funding to implement it. Those filters, hopefully, kill 99% of batshit-crazy ideas. Billionaires, on the other hand, can just speak and fund things into existence, no matter how damaging and ill-thought-out the ideas behind them happen to be.

[Teaching] is a field in which a third of employees already qualify for government assistance. And now Jeff Bezos, a man whose own workers also rely on these same low-income programs, wants to step in – not as a taxpayer, oh no, but as a philanthropist. Honestly, he could have a more positive impact here by just giving those workers a raise. (Or, you know, by paying taxes.)

This is the thing. We can do more and better together than we can do apart. The ideas of the many, honed over years, lead to better outcomes than the few thinking alone.

For all the flaws in the public school system, it’s important to remember: there is no accountability in billionaires’ educational philanthropy.

And, as W. B. Yeats famously never said, charity is no substitute for justice.

Whatever your moral and political views, accountability is something that cuts across the divide. I should imagine there are some reading this who send their kids to private schools and don't particularly see the problem with this. Isn't it just another example of competition within 'the market'?

The trouble with that kind of thinking, at least from my perspective, is twofold. First, it assumes that education is a private instead of a public good. Second, that it's OK to withhold money from society and then use that to subsidise the education of the already-privileged.

Source: Hack Education

Creativity (quote)

“Creativity is intelligence having fun.”

(Albert Einstein)

Audiobooks vs reading

Although I listen to a lot of podcasts (here’s my OPML file) I don’t listen to many audiobooks. That’s partly because I never feel up-to-date with my podcast listening, but also because I often read before going to sleep. It’s much more difficult to find your place again if you drift off while listening than while reading!

This article in TIME magazine (is it still a ‘magazine’?) looks at the research into whether listening to an audiobook is like reading using your eyes. Well, first off, it would seem that there’s no difference in recall of facts given a non-fiction text:

For a 2016 study, Rogowsky put her assumptions to the test. One group in her study listened to sections of Unbroken, a nonfiction book about World War II by Laura Hillenbrand, while a second group read the same parts on an e-reader. She included a third group that both read and listened at the same time. Afterward, everyone took a quiz designed to measure how well they had absorbed the material. “We found no significant differences in comprehension between reading, listening, or reading and listening simultaneously,” Rogowsky says.
However, the difficulty here is that there's already an observed discrepancy in recall between dead-tree books and e-books. So perhaps audiobooks are as good as e-books, but neither is as good as printed matter?

There’s a really interesting point made in the article about how dead-tree books allow for a slight ‘rest’ while you’re reading:

If you’re reading, it’s pretty easy to go back and find the point at which you zoned out. It’s not so easy if you’re listening to a recording, Daniel says. Especially if you’re grappling with a complicated text, the ability to quickly backtrack and re-examine the material may aid learning, and this is likely easier to do while reading than while listening. “Turning the page of a book also gives you a slight break,” he says. This brief pause may create space for your brain to store or savor the information you’re absorbing.
This reminds me of an article on Lifehacker a few years ago that quoted a YouTuber who swears by reading a book while also listening to it:
First of all, it combines two senses…so you end up with really good comprehension while being really efficient at the same time. ...Another possibly even more important benefit is…it keeps you going. So you’re not going back and rereading things, you’re not taking all kinds of unnecessary breaks and pauses, your eyes aren’t running around all the time, and you’re not getting distracted every two minutes.
Since switching to an open source e-reader, I'm no longer using the Amazon Kindle ecosystem so much these days. If I were, I'd be experimenting with their Whispersync technology, which allows you to either pick up where you left off in one medium — or, indeed, use both at the same time.

Source: TIME / Lifehacker

What the EU's copyright directive means in practice

The EU is certainly coming out swinging against Big Tech this year. Or at least it thinks it is. Yesterday, the European Parliament voted in favour of three proposals, outlined by the EFF’s indefatigable Cory Doctorow as:

  1. Article 13: the Copyright Filters. All but the smallest platforms will have to defensively adopt copyright filters that examine everything you post and censor anything judged to be a copyright infringement.

  2. Article 11: Linking to the news using more than one word from the article is prohibited unless you’re using a service that bought a license from the news site you want to link to. News sites can charge anything they want for the right to quote them or refuse to sell altogether, effectively giving them the right to choose who can criticise them. Member states are permitted, but not required, to create exceptions and limitations to reduce the harm done by this new right.

  3. Article 12a: No posting your own photos or videos of sports matches. Only the “organisers” of sports matches will have the right to publicly post any kind of record of the match. No posting your selfies, or short videos of exciting plays. You are the audience, your job is to sit where you’re told, passively watch the game and go home.

Music Week pointed out that Article 13 is particularly problematic for artists:

While the Copyright Directive covers a raft of digital issues, a sticking point within the music industry had been the adoption of Article 13 which seeks to put the responsibility on online platforms to police copyright in advance of posting user generated content on their services, either by restricting posts or by obtaining full licenses for copyrighted material.
The proof of the pudding, as The Verge points out, will be in the interpretation and implementation by EU member states:

However, those backing these provisions say the arguments above are the result of scaremongering by big US tech companies, eager to keep control of the web’s biggest platforms. They point to existing laws and amendments to the directive as proof it won’t be abused in this way. These include exemptions for sites like GitHub and Wikipedia from Article 13, and exceptions to the “link tax” that allow for the sharing of mere hyperlinks and “individual words” describing articles without constraint.

I can't help but think this is a ham-fisted way of dealing with a non-problem. As Doctorow also states, part of the issue here is the assumption that competition in a free market is at the core of creativity. I'd argue that's untrue, that culture is built by respectfully appropriating and building on the work of others. These proposals, as they currently stand (and as I currently understand them) actively undermine internet culture.

Source: Music Week / EFF / The Verge

The Amazon Echo as an anatomical map of human labor, data and planetary resources

This map of what happens when you interact with a digital assistant such as the Amazon Echo is incredible. The image is taken from a lengthy piece of work that tries to draw attention to the hidden costs of using such devices.

With each interaction, Alexa is training to hear better, to interpret more precisely, to trigger actions that map to the user’s commands more accurately, and to build a more complete model of their preferences, habits and desires. What is required to make this possible? Put simply: each small moment of convenience – be it answering a question, turning on a light, or playing a song – requires a vast planetary network, fueled by the extraction of non-renewable materials, labor, and data. The scale of resources required is many magnitudes greater than the energy and labor it would take a human to operate a household appliance or flick a switch. A full accounting for these costs is almost impossible, but it is increasingly important that we grasp the scale and scope if we are to understand and govern the technical infrastructures that thread through our lives.
It's a tour de force. Here's another extract:
When a human engages with an Echo, or another voice-enabled AI device, they are acting as much more than just an end-product consumer. It is difficult to place the human user of an AI system into a single category: rather, they deserve to be considered as a hybrid case. Just as the Greek chimera was a mythological animal that was part lion, goat, snake and monster, the Echo user is simultaneously a consumer, a resource, a worker, and a product. This multiple identity recurs for human users in many technological systems. In the specific case of the Amazon Echo, the user has purchased a consumer device for which they receive a set of convenient affordances. But they are also a resource, as their voice commands are collected, analyzed and retained for the purposes of building an ever-larger corpus of human voices and instructions. And they provide labor, as they continually perform the valuable service of contributing feedback mechanisms regarding the accuracy, usefulness, and overall quality of Alexa’s replies. They are, in essence, helping to train the neural networks within Amazon’s infrastructural stack.
Well worth a read, especially alongside another article in Bloomberg about what they call 'oral literacy' but which I referred to in my thesis as 'oracy':
Should the connection between the spoken word and literacy really be so alien to us? After all, starting in the 1950s, basic literacy training in elementary schools in the United States has involved ‘phonics.’ And what is phonics but a way of attaching written words to the sounds they had been or could become? The theory grew out of the belief that all those lines of text on the pages of schoolbooks had become too divorced from their sounds; phonics was intended to give new readers a chance to recognize written language as part of the world of language they already knew.
The technological landscape is reforming what it means to be literate in the 21st century. Interestingly, some of that is a kind of a return to previous forms of human interaction that we used to value a lot more.

Sources: Anatomy of AI and Bloomberg

Working (quote)

“Those who work much, do not work hard.”

(Henry David Thoreau)

6 things that the best jobs have in common

Look at the following list and answer honestly the extent to which your current role, either as an employee or freelancer, matches up:

  1. Work that is engaging
  2. Work that benefits other people
  3. Work that you're good at (and feel valued for)
  4. Flexibility in how and where you work
  5. A lack of major negatives (e.g. long commute, unpredictable working hours)
  6. The chance for meaningful collaboration
I would wager that very few people could claim to be enjoying all six. I'm pretty close in my current position, I reckon, but of course it's easy to quickly forget how privileged I am.

It’s easier to see how remote positions fulfil points #4 and #5 than being employed in a particular place to work certain hours. On the other hand, the second part of #3 and #6 can be difficult remotely.

My advice? Focus on #1 and #2, as they’re perhaps the most difficult to engineer. As an employee, look for interesting jobs with companies that have a pro-social mission. And if you’re a freelancer, once you’re financially secure, seek out gigs with similar organisations.

Source: Fast Company


Image by WOCinTech Chat used under a CC BY license

Invisible turmoil (quote)

“It appear[s] like a calm existence [but] the turmoil is invisible.”

(Maira Kalman)

Simple sustainable stories

Some people are easy to follow online. They have one social media account to which they post regularly, and back that up with a single website where they expand on those points.

Stowe Boyd, whose work I’ve followed (or attempted to follow) for a few years now, is not one of these people. In fact, the number of platforms he tried earlier this year prompted me to get in touch with him to ask just how many platforms now had his subscribers' email addresses.

Ironically, it was only last week that I decided to support Stowe’s latest venture via Substack. However, in a post yesterday he explains that he’s going ‘back to square one’:

I won’t recapitulate the many transitions that have gone on in my search for the ‘right’ newsletter/subscription technologies over the past year. But I have come to the conclusion that I am more interested in growing the community of Work Futures readers than I am in trying to make cash flow from it.
The thing I've learned about posting things to the internet over the last twenty years is that nobody cares. People support things that reflect who they believe themselves to be right now. That changes over time.

So if you’re putting things online, you have to make sure it works for you. Even the most fun jobs imaginable can become… something else if you focus too much on what a fickle audience wants.

As I said, I am motivated to take these steps in part by the desire to simplify my daily activities, and shelve work patterns that suck time. But I am equally motivated by making the discourse around these topics more open, while encouraging people to support Work Futures, but in that order of importance.
Openness always wins. You can support Stowe's work via donations, and my work via Patreon.

Source: Work Futures

Issue [#315]: Minimalism FTW

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

What do happy teenagers do?

This chart, via Psychology Today, is pretty unequivocal. It shows the activities correlated with happiness (green) and unhappiness (red) in American teenagers:

I discussed this with our eleven year-old son, who pretty much just nodded his head. I’m not sure he knew what to say, given that most of the things he enjoys doing in his free time are red on that chart!

Take a look at the bottom of the chart: Listening to music shows the strongest correlation with unhappiness. That may seem strange at first, but consider how most teens listen to music these days: On their phones, with earbuds firmly in place. Although listening to music is not screen time per se, it is a phone activity for the vast majority of teens. Teens who spend hours listening to music are often shutting out the world, effectively isolating themselves in a cocoon of sound.
This stuff isn't rocket science, I guess:
There’s another way to look at this chart – with the exception of sleep, activities that usually involve being with other people are the most strongly correlated with happiness, and those that involve being alone are the most strongly correlated with unhappiness. That might be why listening to music, which most teens do alone, is linked to unhappiness, while going to music concerts, which is done with other people, is linked to happiness. It’s not the music that’s linked to unhappiness; it’s the way it’s enjoyed. There are a few gray areas here. Talking on a cell phone and using video chat are linked to less happiness – perhaps because talking on the phone, although social connection, is not as satisfying as actually being with others, or because they are phone activities even though they are not, strictly speaking, screen time. Working, usually done with others, is a wash, perhaps because most of the jobs teens have are not particularly fulfilling.
I might pin this up in the house somewhere for future reference...

Source: Psychology Today

Burnout-prevention rules

I’ve used quite a bit of Ben Werdmuller’s software over the years. He co-founded Elgg, which I used for some of my postgraduate work, and Known, which a few of us experimented with for blogging a few years ago.

Ben’s always been an entrepreneur and is currently working on blockchain technologies after working for an early stage VC company. He’s a thoughtful human being and writes about technology and the humans who create it, and in this post bemoans the macho work culture endemic in tech:

It’s not normal. Eight years into working in America, I’m still getting used to the macho culture around vacations. I had previously lived in a country where 28 days per year is the minimum that employers can legally provide; taking time off is just considered a part of life. The US is one of the only countries in the world that doesn’t guarantee any vacation at all (the others are Tonga, Palau, Nauru, Micronesia, Kiribati, and the Marshall Islands). It’s telling that American workers often respond to this simple fact with disbelief. How does anything get done?! Well, it turns out that a lot gets done when people aren’t burned out or chained to their desks.

Ben comes up with some 'rules':
  1. Take a real lunch hour
  2. Take short breaks and get a change of scenery
  3. Go home
  4. Rotate being on call — and automate as much as possible
  5. Always know when your next vacation is
  6. Employers: provide Time Off In Lieu (or pay for overtime)
  7. Trust
  8. Track and impose norms with structure
  9. Take responsibility for each other’s well being
All solid ideas, but only nine rules? I feel like there's a tenth one missing:
  10. Connect with a wider purpose

After all, if you don’t know the point of what you’re working for, then you’ll be lacking motivation no matter how many (or few) hours you work.

Source: Ben Werdmuller

Feedback from the community

In last week’s newsletter, the first after a month’s hiatus over the summer, I asked the 1,500+ subscribers to Thought Shrapnel if they’d send me answers to the following:

  1. What do you really like about Thought Shrapnel in its current format?
  2. What do you dislike about it?
So far, six days later, I've received 15 responses, which represents 1% of the subscriber base.

Here’s an anonymised sample of what they said:

  • "I like the diversity of links and ideas that you provide. Not sure if that helps."
  • "I don't dislike it, but some of the more technical stuff -- blockchain -- is less interesting to me than the educational stuff."
  • "You remind me of myself 40 years ago. Thank you."
  • "In its current format, it’s hard to save to Pocket for offline reading on an airplane (or someplace else without an Internet connection)."
  • "I most appreciate your insight and perspective in these informational sources. That is to suggest...that I for one value when you provide context about the stories you're sharing. Furthermore, you dig in a bit deeper and educate about the nuance involved...but also the larger impact of this news."
  • "I’m happy to support the continuing collation of, and reflection on developments. Your thinking touches on multiple fields, and this is something I find particularly valuable."
  • "For me it's a better way of keeping up to date with what you're posting rather than getting notifications of each individual post through other channels."
  • "I also like how you point to new robust technologies which I’m on the lookout for."
  • "There's not much I don't like and, to be honest, I'm happy to skip the parts each week that aren't particularly relevant to me."
  • "I don't know Google's parameters for clipping emails, but I do know that I can click the "view full email" option to see it all, but I usually don't."
It's always nice to see kind words written about your work, which I obviously appreciate. The main thing it would seem that I need to change is that the newsletter gets 'clipped' by GMail and other providers. In other words, I need to make it shorter.

 

Expertise and knowledge (quote)

“With your expertise and knowledge, but you’ll never be an artist

And I’m harder on myself than you could ever be regardless

What I’ll never be is flawless, all I’ll ever be is honest

Even when I’m gone they’re gonna say I brought it”

(Eminem)

Fluency without conceptual understanding

I’ve been following Dan Meyer’s work on-and-off for over a decade now. He’s a Maths teacher by trade, but now working as Chief Academic Officer at Desmos after gaining his PhD from Stanford. He’s a smart guy, and a great blogger.

Dan’s particularly interested in how kids learn Maths (or ‘Math’ because he’s American) and is always particularly concerned to disprove/squash approaches that don’t work:

In the wake of Barbara Oakley’s op-ed in the New York Times arguing that we overemphasize conceptual understanding in math class, it’s become clear to me that our national conversation about math instruction is missing at least one crucial element: nobody knows what anybody means by “conceptual understanding.”
It's worth reading the whole post (and the comment section), but I just wanted to pull out a couple of things which I think are useful:
A student who has procedural fluency but lacks conceptual understanding …
  • Can accurately subtract 2018-1999 using a standard algorithm, but doesn’t recognize that counting up would be more efficient.
  • Can accurately compute the area of a triangle, but doesn’t recognize how its formula was derived or how it can be extended to other shapes. (eg. trapezoids, parallelograms, etc.)
  • Can accurately calculate the discriminant of y = x² + 2 to determine that it doesn’t have any real roots, but couldn’t draw a quick sketch of the parabola to figure that out more efficiently.
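To spell out the first bullet: ‘counting up’ reframes the subtraction as bridging through a round number, which makes the answer obvious without any borrowing. A trivial sketch:

```python
# 'Counting up' for 2018 - 1999: hop to the nearest round number first,
# then add the remaining distance.
to_round = 2000 - 1999    # 1 step up to 2000
remainder = 2018 - 2000   # 18 more steps to 2018
total = to_round + remainder

print(total)  # 19, the same answer the standard algorithm gives
```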
I find this all the time with my own kids, and also when I was teaching. For example, I knew that the students in my Year 7 History class could draw a line graph in Maths, but they didn't seem to be able to do it in my classroom for some reason. In other words, they were 'procedurally fluent' in a particular domain.

Children are very good at giving the impression to adults that they understand and can do what they’re being told to do. Poke a little, and you come to realise that they don’t really understand what’s going on. That’s particularly true in History, where it’s easy to regurgitate facts and dates, without any empathy or historical understanding.

Another thing that Dan points out, and which I think we should all take to heart, is that we should learn a bit of humility. He criticises both Barbara Oakley (author of the op-ed in The New York Times) and Paul Morgan (author of an article with which he disagrees) for not having what Nassim Nicholas Taleb would call ‘skin in the game’:

If you’re going to engage with the ideas of a complex field, engage with its best. That’s good practice for all of us and it’s especially good practice for people who are commenting from outside the field like Oakley (trained in engineering) and Morgan (trained in education policy).
Everyone's got opinions. The important thing is to listen to those who are talking sense.

Source: dy/dan

Dealing with the downsides of remote working

A colleague, who also works remotely, shared this article recently. Although I enjoy working remotely, it’s not without its downsides.

The author, Martin De Wulf, is a coder writing for an audience of software engineers. That’s not me, but I do work in the world of tech. The things that De Wulf says make remote working stressful are:

  1. Dehumanisation: "communication tends to stick to structured channels"
  2. Interruptions and multitasking: "being responsive on the chat accomplishes the same as being on time at work in an office: it gives an image of reliability"
  3. Overworking: "this all amounts for me to the question of trust: your employer trusted you a lot, allowing you to work on your own terms, and in exchange, I have always felt compelled to actually work a lot more than if I was in an office."
  4. Being a stay at home dad: "When you spend a good part of your time at home, your family sees you as more available than they should."
  5. Loneliness: "I do enjoy being alone quite a lot, but even for me, after two weeks of only seeing colleagues through my screen, and then my family at night, I end up feeling quite sad. I miss feeling integrated in a community of peers."
  6. Deciding where to work every day: "not knowing where I will be working every day, and having to think about which hardware I need to take with me"
  7. You never leave 'work': "working at home does not leave you time to cool off while coming back home from work"
  8. Career risk: "working remotely makes you less visible in your company"
I've managed to deal with at least half of this list. Here are some suggestions.
  • Video conference calls: they're not a replacement for face-to-face meetings, but they're a lot better than audio only or just relying on emails and text chats.
  • Home office: I have one separate to the house. Also, it sounds ridiculous but I've got a sign I bought on eBay that slides between 'free' (green) and 'busy' (red).
  • Travel: at every opportunity. Even though it takes me away from my wife and kids, I still see them a lot more than the average office worker sees theirs.
  • Realistic expectations: four hours of solid 'knowledge work' per day plus emails and admin tasks is enough.
Source: Hacker Noon

Natural light as an 'office perk'

You may not be able to detect it, but fluorescent lights flicker. They trigger my migraines. In fact, they affect me to such an extent that, when I worked at the university, I was on the ‘disabled’ list and had to have adjustments made. These included making sure I sat near a window to maximise the amount of natural light in my workspace.

In this HBR article, written by a partner at an HR advisory and research firm, the author cites a survey which shows that all employees want access to natural light:

In a research poll of 1,614 North American employees, we found that access to natural light and views of the outdoors are the number one attribute of the workplace environment, outranking stalwarts like onsite cafeterias, fitness centers, and premium perks including on-site childcare.
One of the best things about working remotely ('from home') is that you can go and sit somewhere that has good natural light. There's a coffee shop near us that has two walls completely made of glass. It's wonderful.
The study also found that the absence of natural light and outdoor views hurts the employee experience. Over a third of employees feel that they don’t get enough natural light in their workspace. 47% of employees admit they feel tired or very tired from the absence of natural light or a window at their office, and 43% report feeling gloomy because of the lack of light.
The next point is an important one about hierarchies:
Too often, organizations design workspaces for executives with large windows while lower level employees do not have access to light. But it doesn’t have to be this way. Airbnb has pushed the limits of designing its customer call center operation in Portland, Oregon. Rather than windowless work stations commonly found in call centers, the Airbnb Call Center is designed to be an open space with access to natural light and views of the surroundings while replacing desks and phones with long couches, standing desks and wireless technology. The benefits of these elements are well recognized. In fact, some European Union countries mandate employee proximity to windows as part of their national building code! This is because they realize that an absence of natural light hurts overall employee experience, up and down the organization.
I've been reading Vertical: The City from Satellites to Bunkers by Stephen Graham, which explores issues like these. Fascinating stuff.

Source: Harvard Business Review

Choice (quote)

“People who have no choice are generally unhappy. But people with too many choices are almost as unhappy as those who have no choice at all.”

(Ellen Ullman)

The importance of marginalia

Austin Kleon makes a simple, but important point, about how to become a writer:

I believe that the first step towards becoming a writer is becoming a reader, but the next step is becoming a reader with a pencil. When you underline and circle and jot down your questions and argue in the margins, you’re existing in this interesting middle ground between reader and writer.
Kleon has previously recommended Mortimer J. Adler and Charles Van Doren's How to Read a Book, which I bought last time he mentioned it. Ironically enough, it's sitting on my bookshelf, unread. Anyway, he quotes Adler and Van Doren as saying:
Full ownership of a book only comes when you have made it a part of yourself, and the best way to make yourself a part of it — which comes to the same thing — is by writing in it. Why is marking a book indispensable to reading it? First, it keeps you awake — not merely conscious, but wide awake. Second, reading, if it is active, is thinking, and thinking tends to express itself in words, spoken or written. The person who says he knows what he thinks but cannot express it usually does not know what he thinks. Third, writing your reactions down helps you to remember the thoughts of the author. Reading a book should be a conversation between you and the author….Marking a book is literally an expression of your differences or your agreements…It is the highest respect you can pay him.
I read a lot of non-fiction books on my e-reader*, so the equivalent of that for me is Thought Shrapnel, I guess...

Source: Austin Kleon

* Note: I left my old e-reader on the flight home from our holiday. I took the opportunity to upgrade to the bq Cervantes 4, which I bought from Amazon Spain.

We're back (with lots of new links!)

After a wonderful August, travelling with my family and taking time off from Thought Shrapnel, I’m back.

This is the 420th post here. I collect potential posts as drafts, which means I’ve currently got a backlog of 157 potential posts. Obviously, the vast majority of those are never going to see the light of day, so I thought I’d just link to them below.

Here’s a list of 10 articles from each of the first six months of 2018. They’re links that I never got around to writing about, but I think might interest you. Note that I’ve listed them in terms of when I discovered them, which is not necessarily when they were originally published.

January

  1. Fake News about the Future of Education
  2. Social Media Has Hijacked Our Brains and Threatens Global Democracy
  3. 10 New Principles Of Good Design
  4. Want to Change the World With Your Business? Grow Slow
  5. How children’s TV went from Blue Peter to YouTube’s wild west
  6. Autopsy of a Failed Holacracy: Lessons in Justice, Equity, and Self-Management
  7. The Great Attention Heist
  8. Android Users: To Avoid Malware, Try the F-Droid App Store
  9. Showing Off to the Universe: Beacons for the Afterlife of Our Civilization
  10. Will tech giants move on from the internet, now we’ve all been harvested?

February

  1. Your Pills Are Spying On You
  2. The Olympics are a mass propaganda tool for countries to assimilate their citizens
  3. Truly open education will require sweeping changes
  4. The media exaggerates negative news. This distortion has consequences
  5. Humanity's Biggest Machines Will Be Built in Space
  6. The usefulness of dread
  7. The Internet Isn't Forever
  8. Algorithmic Wilderness
  9. Are We Ready For a Post-Work World?
  10. If the elite ever cared about the have-nots, that didn’t last long

March

  1. Education in the (Dis)Information Age
  2. How Tiny Red Dots Took Over Your Life
  3. If you’re so smart, why aren’t you rich? Turns out it’s just chance.
  4. Twitter is not a public utility
  5. The Grim Conclusions of the Largest-Ever Study of Fake News
  6. Small, Regular Doses of Caffeine Offer the Biggest Mental Boost
  7. Bitcoin Is Ridiculous. Blockchain Is Dangerous.
  8. Beyond the Tree Octopus – Why we need a new view of k12 (digital) literacy in a Cambridge Analytica world
  9. I work therefore I am: why businesses are hiring philosophers
  10. Critical Thinking for Educators

April

  1. Researchers develop device that can 'hear' your internal voice
  2. 12 Things Everyone Should Understand About Tech
  3. What Comes After The Social Media Empires
  4. Coming up with a title
  5. Eminent Philosophers Name the 43 Most Important Philosophy Books Written Between 1950-2000: Wittgenstein, Foucault, Rawls & More
  6. An Open Education Reader
  7. Against metrics: how measuring performance by numbers backfires
  8. Say Goodbye To The Information Age: It’s All About Reputation Now
  9. Why co-operative education needs a rethink
  10. A Modest Guide to Productivity

May

  1. Alfie’s Army, misinformation and propaganda: The need for critical media literacy in a mediated world
  2. Hot Prospect: Designer Richard Holbrook’s Three-Year Quest to Understand the World’s Most Creative Companies
  3. Chromebooks are ready for your next coding project
  4. Tech firms can't keep our data forever: we need a Digital Expiry Date
  5. How to achieve happiness and balance as a remote worker
  6. Create Kid Skills for Alexa
  7. Should Africa let Silicon Valley in?
  8. Autocrypt: convenient end-to-end encryption for email
  9. Scouts' new visual identity designed to diversify membership
  10. A cartoon intro to DNS over HTTPS

June

  1. Do platforms work?
  2. Why read Aristotle today?
  3. The Uncertain Future of OER
  4. Chatbots were the next big thing: what happened?
  5. The Theology of Consensus
  6. Building a Cooperative Economy
  7. What’s right for your company? Decision making in 3 different organizational structures
  8. The ethics of ceding more power to machines
  9. UTC is Enough for Everyone... Right?
  10. It’s impossible to lead a totally ethical life—but it’s fun to try

Please consider supporting this work via Patreon. It’s the best way of demonstrating your appreciation for Doug’s time and effort, and ensures that Thought Shrapnel keeps going — not just for you, but for everyone. 👍

A Stoic (quote)

“A Stoic is someone who transforms fear into prudence, pain into transformation, mistakes into initiation, and desire into undertaking.”

(Nassim Nicholas Taleb)

Tracking vs advertising

We tend to use words to denote something right up to the time that term becomes untenable, and someone has to invent a better one. Take mobile phones, for example. They’re literally named after the least-used app on them, so we’re crying out for a different way to refer to them. Perhaps a better name would be ‘trackers’.

These days, most people use mobile devices for social networking. These are available free at the point of access, funded by what we’re currently calling ‘advertising’. However, as this author notes, it’s nothing of the sort:

What we have today is not advertising. The amount of personally identifiable information companies have about their customers is absolutely perverse. Some of the world’s largest companies are in the business of selling your personal information for use in advertising. This might sound innocuous but the tracking efforts of these companies are so accurate that many people believe that Facebook listens to their conversations to serve them relevant ads. Even if it’s true that the microphone is not used, the sum of all other data collected is still enough to show creepily relevant advertising.

Unfortunately, the author doesn’t seem to have come to the conclusion yet that it’s the logic of capitalism that got us here. Instead, he just points out that people’s privacy is being abused.

[P]eople now get most of their information from social networks, yet these networks dictate the order in which content is served to the user. Google makes the world's most popular mobile operating system and its purpose is to drive the company's bottom line (ad blocking is forbidden). “Smart” devices are everywhere and companies are jumping over each other to put more shit in your house so they can record your movements and sell the information to advertisers. This is all a blatant abuse of privacy that is completely toxic to society.
Agreed, and it's easy to feel a little helpless against this onslaught. While it's great to have a list of things that users can do, if those things are difficult to implement and/or hard to understand, then it's an uphill battle.

That being said, he does make three concrete suggestions:

To combat this trend, I have taken the following steps and I think others should join the movement:
  • Aggressively block all online advertisements
  • Don’t succumb to the “curated” feeds
  • Not every device needs to be “smart”
I feel I'm already way ahead of the author in this regard:
  • Aggressively block all online advertisements
  • Don’t succumb to the “curated” feeds
    • I quit Facebook years ago, haven't got an Instagram account, and pretty much only post links to my own spaces on Twitter and LinkedIn.
  • Not every device needs to be “smart”
    • I don't really use my Philips Hue lights, and don't have an Amazon Alexa (or even the Google Assistant on my phone).
It's not easy to stand up to Big Tech. The amount of money they pour into things makes their 'innovations' seem inevitable. They can afford to make things cheap and frictionless so you get hooked.

As an aside, it’s interesting to note that those that previously defended Apple as somehow ‘different’ on privacy, despite being the world’s most profitable company, are starting to backtrack.

Source: Nicholas Rempel

Keeping track of articles you want to read

One of the things I like about Hacker News is that, as well as providing useful links to technically-minded stuff, there are also ‘Ask HN’ threads where a user asks a question of the rest of the community.

Ask HN: How do you keep track of articles you want to read?

When I browse HN, I usually pick out a few articles I want to read from the front page, then email the links to myself to read later.

This method works out pretty well for me. I’m wondering if people have other strategies that work better?

I don’t like the ‘inbox as to-do list’ method. Other HN users suggested alternatives, with the top-voted comment at the time of writing this being:

I used Instapaper (https://www.instapaper.com/), then moved to Pocket (https://getpocket.com/) to take advantage of the social features, then moved back to Instapaper for no really good reason. Pocket still looks nicer and the apps are more reliable, in my experience.

They both allow you to save the full text of an article to read later, as well as archiving and organizing articles you’ve already read. They sync to phones, so most of my reading actually happens on public transit. Pocket can also sync to a Kobo ebook reader; not sure about Kindle, but I wouldn’t be surprised if it worked with them, too.

Pocket is great, but I used IFTTT to automatically send RSS feeds there at one point, and now it seems to be in an endless sync loop.

Other HN users said that they pin bookmarks, and so have many, many tabs open at one time. I think that’s a hugely inefficient and resource-intensive approach.

Some kept it super-simple:

I use Org Mode so I have a plain text file called todo-bookmarks.org with a list of links to the articles I want to read.
This caused me to think about what I do. If I want to read something, I actually add the link as a draft post here, on Thought Shrapnel. The best way to ensure I gain value from a potentially-interesting article is to write about it.

I’d rather write about a few links rather than bookmark lots. I’ve all but given up on bookmarking, as it’s almost as quick to search the web for something I’m looking for as it is to search my bookmarks…

Source: Hacker News

Issue #314: Final Holiday Countdown 🏁 ⏲️ 🏖️

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Introverts, collaboration, and creativity

I work, for the most part, in my home office. Physically-speaking it’s a solitary existence as my office is separate to my house. However, I’m constantly talking to people via Telegram, Slack, and Mastodon. It doesn’t feel lonely at all.

So this article about collaboration, which I discovered via Stowe Boyd, is an interesting one:

If you’re looking to be brave and do something entirely new, involving more people at the wrong time could kill your idea.

Work at MIT found that collaboration—where a bunch of people put their heads together to try to come up with innovative solutions—generally “reduced creativity due to the tendency to incrementally modify known successful designs rather than explore radically different and potentially superior ones.”

I’m leading a project at the moment which is scheduled to launch in January 2019. It’s potentially going to be used by hundreds of people in the MVP, and then thousands (and maybe hundreds of thousands) after that.

Yet, when I was asked recently whether I’d like more resources, I said “after the summer”. Why? Because every time you add someone new, it temporarily slows down your project. The same can be true when you’re coming up with ideas. You can go faster alone, but further together.

Many people are at their most creative during solitary activities like walking, relaxing or bathing, not when stuck in a room with people shouting at them from a whiteboard.

Indeed a study found that “solitude can facilitate creativity–first, by stimulating imaginative involvement in multiple realities and, second, by ‘trying on’ alternative identities, leading, perhaps, to self-transformation.”

Essentially just being around other people can keep creative people from thinking new thoughts.

I think this article goes a little too far in discounting the value of collaboration. For example, here are three types of facilitated thinking I have experience with that work well for both introverts and extroverts:

  1. Thinkathons
  2. Note and vote
  3. Crazy eights
That being said, I do agree with the author when he says:
Once you’ve unearthed radical ideas from people, they need nurturing. They need protecting from group-think meetings and committees who largely express speculated unevidenced opinions based on current preferences from past experiences.

Design thinking has a bias towards action: it resists talking yourself out of trying something radical. Creating prototypes helps you to think about your idea in a concrete manner, and get it to test before it gets dumbed down.

Chances are, that crazy idea you had will get toned down if you let too many people look at it. Protect the radical and push it hard!

Source: Paul Taylor (via Stowe Boyd)

Busyness and value creation

I subscribe to both Seth Godin’s blog and his podcast, Akimbo. The man’s a genius as far as I’m concerned.

One of his most recent posts is about productivity:

Now, more than ever, you’re likely to be running a team, managing a project or deciding on your own agenda as a free agent. Time is just about all you’ve got to spend.

And yet, we hardly talk about productivity.

Productivity is the amount of useful output created for every hour of work we do.

You can measure that output in money if you want to (it makes the math easier) but in fact, it’s everything from lives changed to knowledge shared. What matters is the answer to a simple question: did I spend my day producing enough benefit for all the time invested?

So far, standard stuff. What I like is the way he applies it to our current situation in 2018:

The internet has opened the door for more people to organize and plan their day than ever before. And we’re bad at it.

Because we associate busyness with business with productivity.

In my twenties, when I worked in schools, I worked 12+ hours every day. Now I work half that. Why? Because I work from home and can manage my own time. I’m rarely just waiting around or kicking my heels:

Imagine two buildings under construction. Both have 25 well-trained, well-paid, hard-working construction workers. One building, though, was built in half the time of the other. What happened? It turns out that construction almost always slows down because people are waiting. Waiting for the waterproofing to get done (while they wait for the specialist) or waiting for parts or waiting for another part of the project. The internet is the home of the connection economy, which means that this challenge is multiplied by 100. What are you waiting for? When you’re waiting, what are you doing to create value?
It's a useful read, particularly if you feel that you're at a crossroads in your career. You should always go towards that which gives you more agency. That way, you get more of a say in how productive you can be in any given day.
Busy is not your job. Busy doesn’t get you what you seek. Busy isn’t the point. Value creation is.
Source: Seth Godin

Original work (quote)

“To do original work: It’s not necessary to know something nobody else knows. It is necessary to believe something few other people believe.”

(Marc Andreessen)

Assassination markets now available on the blockchain

I first mentioned so-called ‘assassination markets’ in one of my weeknotes back in 2015 when reporting back on a dinner party conversation. For those unfamiliar, the idea has been around for at least the last twenty years.

Here’s how Wikipedia defines them:

An assassination market is a prediction market where any party can place a bet (using anonymous electronic money and pseudonymous remailers) on the date of death of a given individual, and collect a payoff if they "guess" the date accurately. This would incentivise assassination of individuals because the assassin, knowing when the action would take place, could profit by making an accurate bet on the time of the subject's death. Because the payoff is for accurately picking the date rather than performing the action of the assassin, it is substantially more difficult to assign criminal liability for the assassination.
Of course, the blockchain is a trustless system, so perfect for this kind of thing. A new platform called Augur is a prediction market and so, of course, one of the first things to appear on it is a set of 'predictions' about the death of Donald Trump in 2018:
Everyone knew that it was inevitable that assassination markets would quickly pop up on decentralized prediction market platform Augur, but that doesn’t make the fact that users are now betting on whether U.S. President Donald Trump will be assassinated by the end of the year any less jarring.

Yet this market exists, and, though not the most popular bet on Augur, more than 50 shares have been traded on it as of the time of writing. Similar markets, moreover, exist for a number of other public figures, allowing users to gamble on whether 96-year-old actress Betty White and U.S. Senator John McCain — who has been diagnosed with brain cancer — will survive until Jan. 1, 2019.

This is why ethics in technology are important. There is no such thing as a ‘neutral’ technology:

Now that assassination markets are here, a fierce debate has emerged in cryptocurrency circles over what — if anything — should be done about them, as well as who should be held responsible for these clearly-illegal death markets.

The core issue stems from the fact that, in addition to the pure revulsion that these markets should engender in any decent human being, they also create a financial incentive for someone to place a large bet that a public figure will be assassinated and then murder that person for profit. Consequently, the mere presence of these markets makes it more likely that these events will occur, however slim that increase may be.

Interesting times, indeed.

Source: CCN

Not my circus (quote)

“Not my circus. Not my monkeys.”

(Polish proverb)

When we eat matters

As I get older, I’m more aware that some things I do are very affected by the world around me. For example, since finding out that the intensity of light you experience during the day is correlated with the amount of sleep you get, I don’t feel so bad about ‘sleeping in’ during the summer months.

So it shouldn’t be surprising that this article in The New York Times suggests that there’s a good and a bad time to eat:

A growing body of research suggests that our bodies function optimally when we align our eating patterns with our circadian rhythms, the innate 24-hour cycles that tell our bodies when to wake up, when to eat and when to fall asleep. Studies show that chronically disrupting this rhythm — by eating late meals or nibbling on midnight snacks, for example — could be a recipe for weight gain and metabolic trouble.

A more promising approach is what some call 'intermittent fasting', where you restrict your calorific intake to an eight-hour window each day and consume nothing other than water for the other 16 hours.
This approach, known as early time-restricted feeding, stems from the idea that human metabolism follows a daily rhythm, with our hormones, enzymes and digestive systems primed for food intake in the morning and afternoon. Many people, however, snack and graze from roughly the time they wake up until shortly before they go to bed. Dr. Panda has found in his research that the average person eats over a 15-hour or longer period each day, starting with something like milk and coffee shortly after rising and ending with a glass of wine, a late night meal or a handful of chips, nuts or some other snack shortly before bed.

That pattern of eating, he says, conflicts with our biological rhythms.

So when should we eat? As early as possible in the day, it would seem:

Most of the evidence in humans suggests that consuming the bulk of your food earlier in the day is better for your health, said Dr. Courtney Peterson, an assistant professor in the department of nutrition sciences at the University of Alabama at Birmingham. Dozens of studies demonstrate that blood sugar control is best in the morning and at its worst in the evening. We burn more calories and digest food more efficiently in the morning as well.
That's not great news for me. After a protein smoothie in the morning and eggs for lunch, I end up eating most of my calories in the evening. I'm going to have to rethink my regime...

Source: The New York Times

LinkedIn: the game?

Just like Facebook, I’ve deleted my LinkedIn account a couple of times. The difference is that I keep coming back to LinkedIn for some reason, while I’m a very happy non-user of Facebook.

This article imagines LinkedIn as a ‘game’ that you can win or lose. The framing is both hilarious and insightful, with the subtitle reading, “A strategy guide for using a semi-pointless social network in all the wrong ways.”

For those unfamiliar, LinkedIn is a 2D, turn-based MMORPG that sets itself apart from its competitors by placing players not in a fantasy world of orcs and goblins, but in the treacherous world of business. Players can choose from dozens of character classes (e.g., Entrepreneurs, Social Media Mavens, Finance Wizards) each with their own skill sets and special moves (Power Lunch; Signal Boost; Invoice Dodge). They gain “experience” by networking, obtaining endorsements from other users, and posting inspirational quotes from Elon Musk.

The general goal of LinkedIn (the game) is to find and connect with as many people on LinkedIn (the website) as possible, in order to secure vaguely defined social capital and potentially further one’s career, which allows the player to purchase consumer goods of gradually increasing quality. Like many games, it has dubious real-life utility. The site’s popularity and success, like that of many social networks, depends heavily on obfuscating this fact. This illusion of importance creates a sense of naive trust among its users. This makes it easy to exploit.

Yep, LinkedIn makes its money in a similar way to Facebook: allow users to create contacts on a platform completely owned by one company (which is now Microsoft). Then, charge them to beat the algorithm you created.

Some people I know pay for LinkedIn Premium. I’ve never understood why when it’s effectively the front end for an address book. Instead, I pay for FullContact, which is a much better deal, long-term.

Nevertheless, if you’re playing the LinkedIn game, here’s what to do:

Spend a few hours each day connecting with people. Start by searching for employees at powerful corporations like Google and Facebook. As users within various spheres of influence accept your connection requests, you will begin to gain legitimacy. At first a few people might decline your request, but eventually, once your network grows, important people will see that others they know are already connected with you, and accept your invitation without suspicion. Work your way through the corporate food chain like an intestinal parasite at a gratis conference buffet.
As the author notes with a wink and a nod, there are multiple ways of gaming the system, including:
Because there’s no limit to the number of jobs one can have simultaneously, it’s incredibly easy to spam people with superfluous work anniversaries. All you have to do is create 12 active jobs, each with a different starting month. (As far as I can tell, LinkedIn only sends one work anniversary email per user per month, so it’s not worth the trouble to input more than 12.)
I honestly don't know why I continue to use LinkedIn. People message me on there occasionally, and I send (some of) my blog posts there. Other than that, it seems like people farming, just a business version of Facebook.

Source: The Outline

Data transfer as a 'hedge'?

This is an interesting development:

Today, Google, Facebook, Microsoft, and Twitter joined to announce a new standards initiative called the Data Transfer Project, designed as a new way to move data between platforms. In a blog post, Google described the project as letting users “transfer data directly from one service to another, without needing to download and re-upload it.”

This, of course, would probably not have happened without GDPR. So how does it work?

The existing code for the project is available open-source on GitHub, along with a white paper describing its scope. Much of the codebase consists of “adapters” that can translate proprietary APIs into an interoperable transfer, making Instagram data workable for Flickr and vice versa. Between those adapters, engineers have also built a system to encrypt the data in transit, issuing forward-secret keys for each transaction. Notably, that system is focused on one-time transfers rather than the continuous interoperability enabled by many APIs.
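The adapter idea described in the quote can be sketched roughly as follows. This is a minimal illustration of the general pattern only, not the Data Transfer Project's actual API; every class and method name here is invented for the example.

```python
# Sketch of the "adapter" pattern behind portable data transfer: each service
# translates between its proprietary API and a common intermediate model, so
# an export from one service can be imported into any other.
# All names here are hypothetical, for illustration only.

from dataclasses import dataclass, field


@dataclass
class Photo:
    """A service-neutral intermediate representation of one item."""
    title: str
    url: str
    tags: list[str] = field(default_factory=list)


class ServiceAdapter:
    """Translates between a proprietary API and the common model."""

    def export_photos(self) -> list[Photo]:
        raise NotImplementedError

    def import_photos(self, photos: list[Photo]) -> int:
        raise NotImplementedError


class FakeSourceAdapter(ServiceAdapter):
    def export_photos(self) -> list[Photo]:
        # In reality this would call the source service's API.
        return [Photo(title="Holiday", url="https://example.com/1.jpg", tags=["travel"])]


class FakeDestinationAdapter(ServiceAdapter):
    def __init__(self) -> None:
        self.stored: list[Photo] = []

    def import_photos(self, photos: list[Photo]) -> int:
        # In reality this would call the destination service's API.
        self.stored.extend(photos)
        return len(photos)


def transfer(source: ServiceAdapter, destination: ServiceAdapter) -> int:
    """A one-time transfer: export from one adapter, import into another."""
    return destination.import_photos(source.export_photos())


print(transfer(FakeSourceAdapter(), FakeDestinationAdapter()))  # prints 1
```

The design point is that each service only has to map to and from the shared model once, rather than every pair of services needing a bespoke converter.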

I may be being cynical, but just because something is open source doesn't mean that it's a level playing field for everyone. In fact, I'd wager that this is large companies hedging against new entrants to the market.

The project was envisioned as an open-source standard, and many of the engineers involved say a broader shift in governance will be necessary if the standard is successful. “In the long term, we want there to be a consortium of industry leaders, consumer groups, government groups,” says Fair. “But until we have a reasonable critical mass, it’s not an interesting conversation.”

This would be great if it pans out in the way it's presented in the article. My 20+ years experience on the web, however, would suggest otherwise.

Source: The Verge

Issue #313: Mootivation

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Childhood amnesia

My kids will often ask me about what I was like at their age. It might be about how fast I swam a couple of lengths of freestyle, it could be what music I was into, or when I went on a particular holiday I mentioned in passing. Of course, as I didn’t keep a diary as a child, these questions are almost impossible to answer. I simply can’t remember how old I was when certain things happened.

Over and above that, though, there are some things that I’ve just completely forgotten. I only realise this when I see, hear, or perhaps smell something that reminds me of a thing that my conscious mind had chosen to leave behind. It’s particularly true of experiences from when we are very young. This phenomenon is known as ‘childhood amnesia’, as an article in Nautilus explains:

On average, people’s memories stretch no farther than age three and a half. Everything before then is a dark abyss. “This is a phenomenon of longstanding focus,” says Patricia Bauer of Emory University, a leading expert on memory development. “It demands our attention because it’s a paradox: Very young children show evidence of memory for events in their lives, yet as adults we have relatively few of these memories.”

In the last few years, scientists have finally started to unravel precisely what is happening in the brain around the time that we forsake recollection of our earliest years. “What we are adding to the story now is the biological basis,” says Paul Frankland, a neuroscientist at the Hospital for Sick Children in Toronto. This new science suggests that as a necessary part of the passage into adulthood, the brain must let go of much of our childhood.

Interestingly, our seven year-old daughter is on the cusp of this forgetting. She’s slowly forgetting things that she had no problem recalling even last year, and has to be prompted by photographs of the event or experience.

One experiment after another revealed that the memories of children 3 and younger do in fact persist, albeit with limitations. At 6 months of age, infants’ memories last for at least a day; at 9 months, for a month; by age 2, for a year. And in a landmark 1991 study, researchers discovered that four-and-a-half-year-olds could recall detailed memories from a trip to Disney World 18 months prior. Around age 6, however, children begin to forget many of these earliest memories. In a 2005 experiment by Bauer and her colleagues, five-and-a-half-year-olds remembered more than 80 percent of experiences they had at age 3, whereas seven-and-a-half-year-olds remembered less than 40 percent.

It's fascinating, and also true of later experiences, although to a lesser extent. Our brains conceal some of our memories by rewiring themselves. This is all part of growing up.

This restructuring of memory circuits means that, while some of our childhood memories are truly gone, others persist in a scrambled, refracted way. Studies have shown that people can retrieve at least some childhood memories by responding to specific prompts—dredging up the earliest recollection associated with the word “milk,” for example—or by imagining a house, school, or specific location tied to a certain age and allowing the relevant memories to bubble up on their own.

So we shouldn't worry too much about remembering childhood experiences in high fidelity. After all, it's important to be able to tell new stories to both ourselves and other people, casting prior experiences in a new light.

Source: Nautilus

You can't escape your problems through travel

I work from home, but travel quite a bit for the kind of work I do. I’ve noticed how, after three weeks of being based at home, I get restless. The four walls of my home office get a little bit stifling, even if I do augment them with the occasional working visit to the local coffee shop.

Work travel is, of course, different to holiday/vacation. However, as I write this from Montana, USA, I’m reminded how easy it is to slip into the mindset of how travel or money or a relationship can solve your problems in life.

This heavily illustrated article is a good reminder that your need to sort out your life is independent of external things, including travel.

Travel is the answer many of us look to when we feel the automation of life. The routine of waking up, getting ready, going to work, eating the same lunch, sitting in meetings, getting off work, going home, eating dinner, relaxing, going to sleep, and then doing it all over again can feel like a never-ending road that is housed within the confines of a mundane box.

The reason I read Stoic philosophy every day is that it can give you a perspective of happiness that is independent of location, financial circumstances, or relationship status.

Since much of what we desire lives on the outside (i.e. in the future), we make it the mission of our Box of Daily Experience to make contact with the outer world as much as possible. This touch represents the achievement of our goals and validates our aspirations. We hope that this brief contact will change the architecture of our box, but ultimately, the result is fleeting.

Epictetus, the Stoic philosopher, was lame and, it is thought, an ex-slave. We only know his teachings from the notes that his students made, but his message is pretty clear. Here's the very first section of the Enchiridion. It might not change your life the first time you read it, but try reading it every day for a month:

Some things are in our control and others not. Things in our control are opinion, pursuit, desire, aversion, and, in a word, whatever are our own actions. Things not in our control are body, property, reputation, command, and, in one word, whatever are not our own actions.

The things in our control are by nature free, unrestrained, unhindered; but those not in our control are weak, slavish, restrained, belonging to others. Remember, then, that if you suppose that things which are slavish by nature are also free, and that what belongs to others is your own, then you will be hindered. You will lament, you will be disturbed, and you will find fault both with gods and men. But if you suppose that only to be your own which is your own, and what belongs to others such as it really is, then no one will ever compel you or restrain you. Further, you will find fault with no one or accuse no one. You will do nothing against your will. No one will hurt you, you will have no enemies, and you will not be harmed.

Aiming therefore at such great things, remember that you must not allow yourself to be carried, even with a slight tendency, towards the attainment of lesser things. Instead, you must entirely quit some things and for the present postpone the rest. But if you would both have these great things, along with power and riches, then you will not gain even the latter, because you aim at the former too: but you will absolutely fail of the former, by which alone happiness and freedom are achieved.

Work, therefore to be able to say to every harsh appearance, “You are but an appearance, and not absolutely the thing you appear to be.” And then examine it by those rules which you have, and first, and chiefly, by this: whether it concerns the things which are in our own control, or those which are not; and, if it concerns anything not in our control, be prepared to say that it is nothing to you.

The only thing that can make you happy, calm, and contented is controlling your reactions to external prompts. That’s it. But it takes a lifetime to figure out.

Source: More To That

Don Norman on human-centred technologies

In this article, Don Norman (famous for his seminal work The Design of Everyday Things) takes to task our technology-centric view of the world:

We need to switch from a technology-centric view of the world to a people-centric one. We should start with people’s abilities and create technology that enhances people’s capabilities: Why are we doing it backwards?

Instead of focusing on what we as humans require, we start with what technology is able to provide. Norman argues that it is us serving technology rather than the other way around:

Just think about your life today, obeying the dictates of technology–waking up to alarm clocks (even if disguised as music or news); spending hours every day fixing, patching, rebooting, inventing work-arounds; answering the constant barrage of emails, tweets, text messages, and instant this and that; being fearful of falling for some new scam or phishing attack; constantly upgrading everything; and having to remember an unwieldy number of passwords and personal inane questions for security, such as the name of your least-liked friend in fourth grade. We are serving the wrong masters.

I particularly like his example of car accidents. We're fed the line that autonomous vehicles will dramatically cut the number of accidents on our road, but is that right?

Over 90% of industrial and automobile accidents are blamed on human error with distraction listed as a major cause. Can this be true? Look, if 5% of accidents were caused by human error, I would believe it. But when it is 90%, there must be some other reason, namely, that people are asked to do tasks that people should not be doing. Tasks that violate fundamental human abilities.

Consider the words we use to describe the result: human error, distraction, lack of attention, sloppiness–all negative terms, all implying the inferiority of people. Distraction, in particular, is the byword of the day–responsible for everything from poor interpersonal relationships to car accidents. But what does the term really mean?

It’s a good article, particularly at a time when we’re thinking about robots and artificial intelligence replacing humans in the jobs market. It certainly made me think about my technology choices.

Source: Fast Company


Be good for something (quote)

“Aim above morality. Be not simply good, be good for something.”

(Henry David Thoreau)

Work and play (quote)

“A master in the art of living draws no sharp distinction between his work and his play; his labor and his leisure; his mind and his body; his education and his recreation. He hardly knows which is which. He simply pursues his vision of excellence through whatever he is doing, and leaves others to determine whether he is working or playing. To himself, he always appears to be doing both.”

(Lawrence Pearsall Jacks)

Issue #312: If it's not one thing, it's another

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Break the rules like an artist (quote)

“Learn the rules like a pro, so you can break them like an artist.”

(Pablo Picasso)

On 'radical incompetence'

One of the reasons I’ve retreated from Twitter since May of last year is the rise of angry politics. I can’t pay attention to everything that’s happening all of the time. And I certainly haven’t got the energy to deal with problems that aren’t materially affecting me or the people I care about.

Brexit, then, is a strange one. On the one hand, I participated in a democratic election to elect a government. Subsequently, a government formed from a party I didn’t vote for called a referendum on the United Kingdom’s membership of the European Union. As we all know, the result was close, and based on lies and illegal funding. Nevertheless, perhaps as a citizen I should participate democratically and then get on with my own life.

On the other hand of course, this isn’t politics as usual. There’s been a rise in nationalistic fervour that we haven’t seen since the 1930s. It’s alarming, particularly at a time when smartphones, social media, and the ever-increasing speed of the news cycle make it difficult for citizens to pay sustained attention to anything.

This article in The New York Times zooms out from the particular issues of Trump and Brexit to look at the wider picture. It’s not mentioned specifically in the article, but documentary evidence of struggles around political power and sovereignty goes back at least to the Magna Carta in England. One way of looking at that is that King John was the Donald Trump of his time, so the barons took power from him.

It’s easy to stand for the opposite of something: you don’t have to do any of the work. All that’s necessary is to point out problems, flaws, and issues with the person, organisation, or concept that you’re attacking. So demagogues and iconoclasts such as Boris Johnson and Donald Trump, whose lack of a coherent position wouldn’t work at any other time, all of a sudden gain credibility in times of upheaval.

Like so many political metaphors, the distinction between “hard” and “soft” is misleading. Any Brexiteer wanting to perform machismo will reach for the “hard” option. But as has become increasingly plain over the past two years, and especially over recent weeks, nobody has any idea what “hard” Brexit actually means in policy terms. It is not so much hard as abstract. “Soft” Brexit might sound weak or halfhearted, but it is also the only policy proposal that might actually work.

What appear on the surface to be policy disputes over Britain’s relationship with Brussels are actually fundamental conflicts regarding the very nature of political power. In this, the arguments underway inside Britain’s Conservative Party speak of a deeper rift within liberal democracies today, which shows no sign of healing. In conceptual terms, this is a conflict between those who are sympathetic to government and those striving to reassert sovereignty.

I'm writing this on the train home from London. I haven't participated in or seen any of the protests around Trump's visit to the UK. I have, however, seen plenty of people holding placards and banners, obviously on their way to, or from, a rally.

My concern about getting angry in bite-sized chunks on Twitter or reducing your issues with someone like Trump or Johnson to a placard is that you’re playing them at their own game. They’ll win. They thrive on the oxygen of attention. Cut it off and they’ll wither and be forced to slink off to whatever hole they originally crawled from.

A common thread linking “hard” Brexiteers to nationalists across the globe is that they resent the very idea of governing as a complex, modern, fact-based set of activities that requires technical expertise and permanent officials.

[…]

The more extreme fringes of British conservatism have now reached the point that American conservatives first arrived at during the Clinton administration: They are seeking to undermine the very possibility of workable government. For hard-liners such as Jacob Rees-Mogg, it is an article of faith that Britain’s Treasury Department, the Bank of England and Downing Street itself are now conspiring to deny Britain its sovereignty.

What we’re talking about here is ideology. There’s always been a fundamental difference between the left and the right of politics in a way that’s understood enough not to get into here. But issues around sovereignty, nationalism, and self-determination actually cut across the traditional political spectrum. That’s why, for example, Jeremy Corbyn, leader of the British Labour Party, can oppose the EU for vastly different reasons to Jacob Rees-Mogg, arch-Brexiteer.

I haven’t got the energy to go into it here, but to me the crisis in confidence in expertise comes from a warping of the meritocratic system that was supposed to emancipate the working class, break down class structures, and bring forth a fairer society. What’s actually happened is that the political elites have joined with the wealthy to own the means of cultural reproduction. As a result, no-one now seems to trust them.

What happens if sections of the news media, the political classes and the public insist that only sovereignty matters and that the complexities of governing are a lie invented by liberal elites? For one thing, it gives rise to celebrity populists, personified by Mr. Trump, whose inability to engage patiently or intelligently with policy issues makes it possible to sustain the fantasy that governing is simple. What Mr. Johnson terms the “method” in Mr. Trump’s “madness” is a refusal to listen to inconvenient evidence, of the sort provided by officials and experts.

There have been many calls within my lifetime for a 'new politics'. It's nearly always a futile project, and just means a changing of the faces on our screens while the political elite continue their machinations. I'm not super-hopeful, but I do perhaps wonder whether our new-found connectedness, if mediated by decentralised technologies, could change that?

Source: The New York Times

Populism today (quote)

“When we speak of ‘populism’ today, we sometimes mean nothing more than a politics that is audible as well as intelligible to the man in the street - or, to be precise, the man and woman slumped on their sofa, their attention skipping fitfully from flat-screen TV to laptop to smartphone to tablet and back to television, or the man and woman at work, sitting in front of desktop PCs but mostly exchanging suggestive personal messages on their smartphones.”

(Niall Ferguson)

Blogging in the Fediverse with Write.as

I couldn’t be happier about this news. Write.as is a service that allows you to connect multiple blogs to one online editor. You compose your post, then decide where to send it.

Matt Baer, the guy behind Write.as, has announced some exciting new functionality:

After much trial and error, I've finished basic ActivityPub support on Write.as! (Though it's not live yet.) I'm very, very excited about reaching this point so I can try out some new ideas.

So far, most developers in the fediverse have been remaking centralized web services with ActivityPub support. There’s PeerTube for video, PixelFed for social photos, Plume or Microblog.pub for blogging, and of course Mastodon and Pleroma for microblogging — among many others. I’ve loved watching the ecosystem grow over the past several months, but I also think more can be done, and getting AP support in Write.as was the first step to making this happen.
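Underpinning all of these services is the ActivityPub protocol, in which following someone boils down to delivering a small JSON 'Follow' activity to their server. A minimal sketch of what that message looks like, with made-up actor and blog URLs:

```python
import json

def follow_activity(actor: str, target: str) -> str:
    """Build a minimal ActivityStreams 'Follow' activity, the kind of
    message a fediverse server sends when one account follows another.
    (The actor and object URLs passed in are purely illustrative.)"""
    return json.dumps({
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Follow",
        "actor": actor,    # who is doing the following
        "object": target,  # who (or what blog) is being followed
    })

msg = follow_activity("https://mastodon.example/users/alice",
                      "https://writeas.example/blogs/my-blog")
assert json.loads(msg)["type"] == "Follow"
```

In practice the receiving server replies with an Accept activity and then starts delivering new posts to the follower's inbox, which is what lets a Mastodon account follow a Write.as blog.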

Baer references one of his previous posts where, like the main developer of Mastodon, he takes a stand against some things that people have come to expect from centralised services:

If we're going to build the web world we want, we have to constantly evaluate the pieces we bring with us from the old to the new. With each iteration of an idea on the web we need to question the very nature of certain aspects' existence in the first place, and determine whether or not every single old thing unimproved should still be with us. It's the only way we can be sure we're moving — if not in the direction, at least in some direction that will teach us something.

In Baer's case, it's not having public 'likes' and in Mastodon's case it's not providing the ability to quote toots. Either way, I applaud them for taking a stand.

Baer is planning a new product called Read.as:

Today my idea is to split reading and writing across two ActivityPub-enabled products, Write.as and Read.as. The former will stay focused on writing and publishing; AP support will be almost invisible. Blogs can be followed via the web, RSS, email (soon), or ActivityPub-speaking services (for example, I can follow blogs with my Mastodon account, and then share any posts to my followers there). Then Read.as would be the read-only counterpart; you go there when you want to stare at your screen for a while and read something interesting. It would be minimally social, avoid interrupting your life, and preserve your privacy — just like Write.as.

Great, great news!

Source: Write.as

On living in public

In this post, Austin Kleon, backpedalling a little from the approach he seemed to promote in Show Your Work!, talks about the problems we all face with ‘living in public’.

It seems ridiculous to say, but 2013, the year I wrote the book, was a simpler time. Social media seemed much more benign to me. Back then, the worst I felt social media did was waste your time. Now, the worst social media does is cripple democracy and ruin your soul.

Kleon quotes Warren Ellis, who writes one of my favourite newsletters (his blog is pretty good, too):

You don’t have to live in public on the internet if you don’t want to. Even if you’re a public figure, or micro-famous like me. I don’t follow anyone on my public Instagram account. No shade on those who follow me there, I’m glad you give me your time – but I need to be in my own space to get my shit done. You want a “hack” for handling the internet? Create private social media accounts, follow who you want and sit back and let your bespoke media channels flow to you. These are tools, not requirements. Don’t let them make you miserable. Tune them until they bring you pleasure.

In May 2017, after being on Twitter over a decade, I deleted my Twitter history, and now delete tweets on a weekly basis. Now, I hang out on a social network that I co-own called social.coop and which is powered by a federated, decentralised service called Mastodon.

I still publish my work, including Thought Shrapnel posts, to Twitter, LinkedIn, etc. It’s just not where I spend most of my time. On balance, I’m happier for it.

Source: Austin Kleon

Artistic value (quote)

I don’t think there’s an artist of any value who doesn’t doubt what they’re doing.

– Francis Ford Coppola

Problems with the present and future of work are of our own making

This is a long essay in which the RSA announces that, along with its partners (one of which, inevitably, is Google) it’s launching the Future Work Centre. I’ve only selected quotations from the first section here.

From autonomous vehicles to cancer-detecting algorithms, and from picking and packing machines to robo-advisory tools used in financial services, every corner of the economy has begun to feel the heat of a new machine age. The RSA uses the term ‘radical technologies’ to describe these innovations, which stretch from the shiny and much talked about, including artificial intelligence and robotics, to the prosaic but equally consequential, such as smartphones and digital platforms.

I highly recommend reading Adam Greenfield's book Radical Technologies: the design of everyday life, if you haven't already. Greenfield isn't beholden to corporate partners, and lets rip.

What is certain is that the world of work will evolve as a direct consequence of the invention and adoption of radical technologies — and in more ways than we might imagine. Alongside eliminating and creating jobs, these innovations will alter how workers are recruited, monitored, organised and paid. Companies like HireVue (video interviewing), Percolata (schedule setting) and Veriato (performance monitoring) are eager to reinvent all aspects of the workplace.

Indeed, and a lot of what's going on is compliance and surveillance of workers smuggled in through the back door while people focus on 'innovation'.

The main problems outlined with the current economy which is being ‘disrupted’ by technology are:

  1. Declining wages (in real terms)
  2. Economic insecurity (gig economy, etc.)
  3. Working conditions
  4. Bullshit jobs
  5. Work-life balance

Taken together, these findings paint a picture of a dysfunctional labour market — a world of work that offers little in the way of material security, let alone satisfaction. But that may be going too far. Overall, most workers enjoy what they do and relish the careers they have established. The British Social Attitudes survey found that twice as many people in 2015 as in 1989 strongly agreed they would enjoy having a job even if their financial circumstances did not require it.

The problem is not with work per se but rather with how it is orchestrated in the modern economy, and how rewards are meted out. As a society we have a vision of what work could and should look like — well paid, protective, meaningful, engaging — but the reality too often falls short.

I doubt the RSA would ever say it without huge caveats, but the problem is neoliberalism. It's all very well looking to the past for examples of technological disruption, but that was qualitatively different from what's going on now. Organisations can run on a skeleton staff and make obscene profits for a very few people.

I feel like warnings such as ‘the robots are coming’ and ‘be careful not to choose an easily-automated occupation!’ are a smokescreen for decisions that people are making about the kind of society they want to live in. It seems like that’s one where most of us (the ‘have nots’) are expendable, while the 0.01% (the ‘haves’) live in historically-unparalleled luxury.

In summary, the lives of workers will be shaped by more technologies than AI and robotics, and in more ways than through the loss of jobs.

Fears surrounding automation should be taken seriously. Yet anxiety over job losses should not distract us from the subtler impacts of radical technologies, including on recruitment practices, employee monitoring and people’s work-life balance. Nor should we become so fixated on AI and robotics that we lose sight of the conventional technologies bringing about change in the present moment.

Exactly. Let's fix 2018 before we start thinking about 2040, eh?

Source: The RSA

Issue #311: Under canvas

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Wisdom and experience (quote)

“Wisdom comes from experience. Experience is often a result of lack of wisdom.”

(Terry Pratchett)

Shabby ideas and shoddy philosophies (quote)

“If most of us are ashamed of shabby clothes and shoddy furniture, let us be more ashamed of shabby ideas and shoddy philosophies.”

(Albert Einstein)

The dangers of distracted parenting

I usually limit myself to three quotations in posts I write here. I’m going to break that self-imposed rule for this article by Erika Christakis in The Atlantic on parents' screentime.

Christakis points out the good and the bad news:

Yes, parents now have more face time with their children than did almost any parents in history. Despite a dramatic increase in the percentage of women in the workforce, mothers today astoundingly spend more time caring for their children than mothers did in the 1960s. But the engagement between parent and child is increasingly low-quality, even ersatz. Parents are constantly present in their children’s lives physically, but they are less emotionally attuned.

As parents, and in society in general, we're super-hot on limiting kids' screentime, but we don't necessarily apply that to ourselves:

[S]urprisingly little attention is paid to screen use by parents... who now suffer from what the technology expert Linda Stone more than 20 years ago called “continuous partial attention.” This condition is harming not just us, as Stone has argued; it is harming our children. The new parental-interaction style can interrupt an ancient emotional cueing system, whose hallmark is responsive communication, the basis of most human learning. We’re in uncharted territory.

'Continuous partial attention' is the term people tend to use these days instead of 'multitasking'. To my mind it's a better term, as it captures the fact that you're not just trying to do different things simultaneously, you're trying to pay attention to them all at once.

I’ve given the example before of my father sitting down to read the newspaper on a Sunday. Is there really much difference to the child, I’ve wondered, between his being hidden behind a broadsheet for an hour, and his scrolling and clicking on a mobile device? In some ways yes, in some ways no.

It has never been easy to balance adults’ and children’s needs, much less their desires, and it’s naive to imagine that children could ever be the unwavering center of parental attention. Parents have always left kids to entertain themselves at times—“messing about in boats,” in a memorable phrase from The Wind in the Willows, or just lounging aimlessly in playpens. In some respects, 21st-century children’s screen time is not very different from the mother’s helpers every generation of adults has relied on to keep children occupied. When parents lack playpens, real or proverbial, mayhem is rarely far behind. Caroline Fraser’s recent biography of Laura Ingalls Wilder, the author of Little House on the Prairie, describes the exceptionally ad hoc parenting style of 19th-century frontier parents, who stashed babies on the open doors of ovens for warmth and otherwise left them vulnerable to “all manner of accidents as their mothers tried to cope with competing responsibilities.” Wilder herself recounted a variety of near-calamities with her young daughter, Rose; at one point she looked up from her chores to see a pair of riding ponies leaping over the toddler’s head.

To me, the difference can be summed up quite easily: our mobile devices are designed to be addictive and capture our full attention, in ways that analogue media and experiences aren't.

Short, deliberate separations can of course be harmless, even healthy, for parent and child alike (especially as children get older and require more independence). But that sort of separation is different from the inattention that occurs when a parent is with a child but communicating through his or her nonengagement that the child is less valuable than an email. A mother telling kids to go out and play, a father saying he needs to concentrate on a chore for the next half hour—these are entirely reasonable responses to the competing demands of adult life. What’s going on today, however, is the rise of unpredictable care, governed by the beeps and enticements of smartphones. We seem to have stumbled into the worst model of parenting imaginable—always present physically, thereby blocking children’s autonomy, yet only fitfully present emotionally.

Physically present but emotionally unavailable. Yes, we need to do better.

Under the circumstances, it’s easier to focus our anxieties on our children’s screen time than to pack up our own devices. I understand this tendency all too well. In addition to my roles as a mother and a foster parent, I am the maternal guardian of a middle-aged, overweight dachshund. Being middle-aged and overweight myself, I’d much rather obsess over my dog’s caloric intake, restricting him to a grim diet of fibrous kibble, than address my own food regimen and relinquish (heaven forbid) my morning cinnamon bun. Psychologically speaking, this is a classic case of projection—the defensive displacement of one’s failings onto relatively blameless others. Where screen time is concerned, most of us need to do a lot less projecting.

Amen to that.

Source: The Atlantic (via Jocelyn K. Glei)

Cory Doctorow on the corruption at the heart of Facebook

I like Cory Doctorow. He’s a gifted communicator who wears his heart on his sleeve. In this article, he talks about Facebook and how what it’s wrought is a result of the corruption at its very heart.

It’s great that the privacy-matters message is finally reaching a wider audience, and it’s exciting to think that we’re approaching a tipping point for indifference to privacy and surveillance.

But while the acknowledgment of the problem of Big Tech is most welcome, I am worried that the diagnosis is wrong.

The problem is that we’re confusing automated persuasion with automated targeting. Laughable lies about Brexit, Mexican rapists, and creeping Sharia law didn’t convince otherwise sensible people that up was down and the sky was green.

Rather, the sophisticated targeting systems available through Facebook, Google, Twitter, and other Big Tech ad platforms made it easy to find the racist, xenophobic, fearful, angry people who wanted to believe that foreigners were destroying their country while being bankrolled by George Soros.

So, for example, people seem to think that Facebook advertising caused people to vote for Trump. As if they were going to vote for someone else, and then changed their mind as a direct result of viewing ads. That’s not how it works.

Companies such as Cambridge Analytica might claim that they can rig elections and change people’s minds, but they’re not actually that sophisticated.

Cambridge Analytica are like stage mentalists: they’re doing something labor-intensive and pretending that it’s something supernatural. A stage mentalist will train for years to learn to quickly memorize a deck of cards and then claim that they can name your card thanks to their psychic powers. You never see the unglamorous, unimpressive memorization practice. Cambridge Analytica uses Facebook to find racist jerks and tell them to vote for Trump and then they claim that they’ve discovered a mystical way to get otherwise sensible people to vote for maniacs.

This isn’t to say that persuasion is impossible. Automated disinformation campaigns can flood the channel with contradictory, seemingly plausible accounts for the current state of affairs, making it hard for a casual observer to make sense of events. Long-term repetition of a consistent narrative, even a manifestly unhinged one, can create doubt and find adherents – think of climate change denial, or George Soros conspiracies, or the anti-vaccine movement.

These are long, slow processes, though, that make tiny changes in public opinion over the course of years, and they work best when there are other conditions that support them – for example, fascist, xenophobic, and nativist movements that are the handmaidens of austerity and privation. When you don’t have enough for a long time, you’re ripe for messages blaming your neighbors for having deprived you of your fair share.

Advertising and influencing works best when you provide a message that people already agree with in a way that they can easily share with others. The ‘long, slow processes’ that Doctorow refers to have been practised offline as well (think of Nazi propaganda, for example). Dark adverts on Facebook are tapping into feelings and reactions that aren’t peculiar to the digital world.

Facebook has thrived by providing ways for people to connect and communicate with one another. Unfortunately, because they’re so focused on profit over people, they’ve done a spectacularly bad job at making sure that the spaces in which people connect are healthy spaces that respect democracy.

There’s an old-fashioned word for this: corruption. In corrupt systems, a few bad actors cost everyone else billions in order to bring in millions – the savings a factory can realize from dumping pollution in the water supply are much smaller than the costs we all bear from being poisoned by effluent. But the costs are widely diffused while the gains are tightly concentrated, so the beneficiaries of corruption can always outspend their victims to stay clear.

Facebook doesn’t have a mind-control problem, it has a corruption problem. Cambridge Analytica didn’t convince decent people to become racists; they convinced racists to become voters.

That last phrase is right on the money.

Source: Locus magazine

On 'unique' organisational cultures

This article on Recode, which accompanies one of their podcast episodes, features some thoughts from Adam Grant, psychologist and management expert. A couple of things he says chime with my experience of going into a lot of organisations as a consultant, too:

“Almost every company I’ve gone into, what I hear is, ‘Our culture is unique!’” Grant said on the latest episode of Recode Decode, hosted by Kara Swisher. “And then I ask, ‘How is it unique?’ and the answers are all the same.”

Exactly. There's only so many ways you can slice and dice hierarchy, so people do exercises around corporate values and mission statements.

“I hear, ‘People really believe in our values and they think that we’re a cause, so we’re so passionate about the mission!’” he added. “Great. So is pretty much every other company. I hear, ‘We give employees unusual flexibility,’ ‘We have all sorts of benefits that no other company offers,’ and ‘We live with integrity in ways that no other company does.’ It’s just the same platitudes over and over.”

If organisations really want to be innovative, they should empower their employees in ways beyond mere words. Perhaps by allowing them to be co-owners of the business, or by devolving power (and budget) to smaller, cross-functional teams?

Another thing that Grant complains about is the idea of ‘cultural fit’. I can see why organisations do this as, after all, you do have to get on and work with the people you’re hiring. However, as he explains, it’s a self-defeating approach:

Startups with a disruptive idea can use “culture fit” to hire a lot of people who all feel passionately about the mission of these potentially world-changing companies, Grant said. But then those people hire even more people who are like them.

“You end up attracting the same kinds of people because culture fit is a proxy for, ‘Are you similar to me? Do I want to hang out with you?’” he said. “So you end up with this nice, homogeneous group of people who fall into groupthink and then it’s easier for them to get disrupted from the outside, and they have trouble innovating and changing.”

I haven't listened to the podcast yet, but the short article is solid stuff.

Recode (via Stowe Boyd)

Issue #310: Moodling about in Barcelona

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Fear (quote)

“One already wet does not fear the rain.” (Turkish proverb)

Reduce your costs, retain your focus

The older I get, the less important I find the things I deemed essential earlier in life. The main thing, it seems, is to find something you can work on with interest over a long period of time. That’s unlikely to be a ‘job’ and more like a problem to be solved, or values to exemplify and share.

Jason Fried writes on his company’s blog about the journey that they’ve taken over the last 19 years. Everyone knows Basecamp because it’s been around for as long as you’ve been on the web.

2018 will be our 19th year in business. That means we’ve survived a couple of major downturns — 2001, and 2008, specifically. I’ve been asked how. It’s simple: It didn’t cost us much to stay in business. In 2001 we had 4 employees. We were competing against companies that had 40, 400, even 4000. We had 4. We made it through, many did not. In 2008 we had around 20. We had millions in revenue coming in, but we still didn’t spend money on marketing, and we still sublet a corner of someone else’s office. Business was amazing, but we continued to keep our costs low. Keeping a handle on your costs must be a habit, not an occasion. Diets don’t work, eating responsibly does.

What is true in business is true in your personal life. I'm writing this out in the garden of our terraced property. It's approximately the size of a postage stamp. No matter, it's big enough for what we need, and living here means my wife doesn't have to work (unless she wants to) and I'm not under pressure to earn some huge salary.

So keep your costs as low as possible. And it’s likely that true number is even lower than you think possible. That’s how you last through the leanest times. The leanest times are often the earliest times, when you don’t have customers yet, when you don’t have revenue yet. Why would you tank your odds of survival by spending money you don’t have on things you don’t need? Beats me, but people do it all the time. ALL THE TIME. Dreaming of all the amazing things you’ll do in year three doesn’t matter if you can’t get past year two.

These days we have huge expectations of what life should give us. The funny thing is that, if you stand back a moment and ask what you actually need, there's never been a time in history when the baseline that society provides has been so high.

We rush around the place trying to be like other people and organisations, when we need to think about who and what we’re trying to be. The way to ‘win’ at life and business is to still be doing what you enjoy and deem important when everyone else has crashed and burned.

Source: Signal v. Noise

The link between sleep and creativity

I’m a big fan of sleep. Since buying a smartwatch earlier this year, I’ve been wearing it all of the time, including in bed at night. What I’ve found is that I’m actually a good sleeper, regularly sleeping better than 95% of other people who use the same Mi Fit app.

Like most people, after a poor night’s sleep I’m not at my best the next day. This article by Ed Yong in The Atlantic helps explain why.

As you start to fall asleep, you enter non-REM sleep. That includes a light phase that takes up most of the night, and a period of much heavier slumber called slow-wave sleep, or SWS, when millions of neurons fire simultaneously and strongly, like a cellular Greek chorus. “It’s something you don’t see in a wakeful state at all,” says Lewis. “You’re in a deep physiological state of sleep and you’d be unhappy if you were woken up.”

During that state, the brain replays memories. For example, the same neurons that fired when a rat ran through a maze during the day will spontaneously fire while it sleeps at night, in roughly the same order. These reruns help to consolidate and strengthen newly formed memories, integrating them into existing knowledge. But Lewis explains that they also help the brain extract generalities from specifics—an idea that others have also supported.

We’ve known for generations that, if we’ve got a problem to solve or a decision to make, it’s a good idea to ‘sleep on it’. Science is catching up with folk wisdom.

The other phase of sleep—REM, which stands for rapid eye movement—is very different. That Greek chorus of neurons that sang so synchronously during non-REM sleep descends into a cacophonous din, as various parts of the neocortex become activated, seemingly at random. Meanwhile, a chemical called acetylcholine—the same one that Loewi identified in his sleep-inspired work—floods the brain, disrupting the connection between the hippocampus and the neocortex, and placing both in an especially flexible state, where connections between neurons can be more easily formed, strengthened, or weakened.

The difficulty is that our sleep quality is affected by blue light confusing the brain as to what kind of day it is. That's why we're seeing increasing numbers of devices changing your screen colour towards the red end of the spectrum in the evening. If you have disrupted sleep, you miss out on an important phase of your sleep cycle.

Crucially, they build on one another. The sleeping brain goes through one cycle of non-REM and REM sleep every 90 minutes or so. Over the course of a night—or several nights—the hippocampus and neocortex repeatedly sync up and decouple, and the sequence of abstraction and connection repeats itself. “An analogy would be two researchers who initially work on the same problem together, then go away and each think about it separately, then come back together to work on it further,” Lewis writes.

“The obvious implication is that if you’re working on a difficult problem, allow yourself enough nights of sleep,” she adds. “Particularly if you’re trying to work on something that requires thinking outside the box, maybe don’t do it in too much of a rush.”

As the article states, there’s further research to be done here. But, given that sleep (along with exercise and nutrition) is one of the three ‘pillars’ of productivity, this certainly chimes with my experience.

Source: The Atlantic

Attention scarcity as an existential threat

This post is from Albert Wenger, a partner at a New York-based early-stage VC firm focused on investing in disruptive networks. It’s taken from his book World After Capital, currently in draft form.

In this section, Wenger is concerned with attention scarcity, which he believes to be both a threat to humanity, and an opportunity for us.

On the threat side, for example, we are not working nearly hard enough on how to recapture CO2 and other greenhouse gases from the atmosphere. Or on monitoring asteroids that could strike earth, and coming up with ways of deflecting them. Or containing the outbreak of the next avian flu: we should have a lot more collective attention dedicated to early detection and coming up with vaccines and treatments.

The reason the world's population is so high is almost entirely due to the technological progress we've made. We're simply better at keeping human beings alive.

On the opportunity side, far too little human attention is spent on environmental cleanup, free educational resources, and basic research (including the foundations of science), to name just a few examples. There are so many opportunities we could dedicate attention to that over time have the potential to dramatically improve quality of life here on Earth not just for humans but also for other species.
Interestingly, he comes up with a theory as to why we haven't heard from any alien species yet:

I am proposing this as a (possibly new) explanation for the Fermi Paradox, which famously asks why we have not yet detected any signs of intelligent life elsewhere in our rather large universe. We now even know that there are plenty of goldilocks planets available that could harbor life forms similar to those on Earth. Maybe what happens is that all civilizations get far enough to where they generate huge amounts of information, but then they get done in by attention scarcity. They collectively take their eye off the ball of progress and are not prepared when something really bad happens such as a global pandemic.

Attention scarcity, then, has the potential to become an existential threat to our species. Pay attention to the wrong things and we could either neglect to avoid a disaster, or cause one of our own making.

Source: Continuations

Our irresistible screens of splendour

Apple is touting a new feature in the latest version of iOS that helps you reduce the amount of time you spend on your smartphone. Facebook are doing something similar. As this article in The New York Times notes, that’s no accident:

There’s a reason tech companies are feeling this tension between making phones better and worrying they are already too addictive. We’ve hit what I call Peak Screen.

For much of the last decade, a technology industry ruled by smartphones has pursued a singular goal of completely conquering our eyes. It has given us phones with ever-bigger screens and phones with unbelievable cameras, not to mention virtual reality goggles and several attempts at camera-glasses.

The article even gives the example of Augmented Reality LEGO play sets which actively encourage you to stop building and spend more time on screens!

Tech has now captured pretty much all visual capacity. Americans spend three to four hours a day looking at their phones, and about 11 hours a day looking at screens of any kind.

So tech giants are building the beginning of something new: a less insistently visual tech world, a digital landscape that relies on voice assistants, headphones, watches and other wearables to take some pressure off our eyes.

[...]

Screens are insatiable. At a cognitive level, they are voracious vampires for your attention, and as soon as you look at one, you are basically toast.

It’s not enough to tell people not to do things. Technology can be addictive, just like anything else, so we need to find better ways of achieving similar ends.

But in addition to helping us resist phones, the tech industry will need to come up with other, less immersive ways to interact with digital world. Three technologies may help with this: voice assistants, of which Amazon’s Alexa and Google Assistant are the best, and Apple’s two innovations, AirPods and the Apple Watch.

All of these technologies share a common idea. Without big screens, they are far less immersive than a phone, allowing for quick digital hits: You can buy a movie ticket, add a task to a to-do list, glance at a text message or ask about the weather without going anywhere near your Irresistible Screen of Splendors.

The issue I have is that it's going to take tightly-integrated systems to do this well, at least at first. So the chances are that Apple or Google will create an ecosystem that only works with their products, providing another way to achieve vendor lock-in.

Source: The New York Times

Rethinking hierarchy

This study featured on the blog of the Stanford Graduate School of Business talks about the difference between hierarchical and non-hierarchical structures. It cites work by Lisanne van Bunderen from the University of Amsterdam, who found that egalitarianism seemed to lead to better performance:

“The egalitarian teams were more focused on the group because they felt like ‘we’re in the same boat, we have a common fate,’” says van Bunderen. “They were able to work together, while the hierarchical team members felt a need to fend for themselves, likely at the expense of others.”

Context, of course, is vital. One place where hierarchy and a command-and-control approach seem important is in high-stakes situations such as the battlefield or hospital operating theatres during delicate operations. Lindred Greer, a professor of organizational behavior at Stanford Graduate School of Business, nevertheless believes that, even in these situations, the amount of hierarchy can be reduced:

In some cases, hierarchy is an unavoidable part of the work. Greer is currently studying the interaction between surgeons and nurses, and surgeons lead by necessity. “If you took the surgeon out of the operating room, you would have some issues,” she says. But surgeons’ dominance in the operating room can also be problematic, creating dysfunctional power dynamics. To help solve this problem, Greer believes that the expression of hierarchy can be moderated. That is, surgeons can learn to behave in a way that’s less hierarchical.

While hierarchy is necessary in some situations, what we need is a more fluid approach to organising, as I've written about recently. The article gives the very practical example of Navy SEALs:

Navy SEALS exemplify this idea. Strict hierarchy dominates out in the field: When a leader says go left, they go left. But when the team returns for debrief, “they literally leave their stripes at the door,” says Greer. The hierarchy disappears; nobody is a leader, nobody a follower. “They fluidly shift out of these hierarchical structures,” she says. “It would be great if business leaders could do this too: Shift from top-down command to a position in which everyone has a say.” Importantly, she reiterated, this kind of change is not only about keeping employees happy, but also about enhancing performance and benefiting the bottom line.

Like the article's author, I'm still looking for something that's going to gain more traction than Holacracy. Perhaps the sociocratic approach could work well, though it does require people to be inducted into it. After all, hierarchy and capitalism are what we're born into these days. They feel 'natural' to people.

Source: Stanford Graduate School of Business (via Stowe Boyd)

Freedom (quote)

“No man is free who is not master of himself.” (Epictetus)

Issue #309: Different

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Crawling before you walk

Alberto Corado, Moodle’s UX Lead, sent me an article by Rebecca Guthrie entitled Crawl, Walk, Run. It contains good, concise advice in three parts:

Crawl. Do things that don’t scale at the beginning. Talk to 50 potential customers, listen, discover pain points, and then begin to form a product to solve that pain. Use this feedback to develop your MVP. Don’t fall in love with your solution. Fall in love with their problem. I’ve mentioned this before, read Lean Startup.

This is what we've been doing so far with the MoodleNet project. I must have spoken to around 50 people all told, running the idea past them, getting their feedback, and iterating towards the prototype we came up with during the design sprint. I'd link to the records I have of those conversations, but I had to take down my notes on the wiki, along with community call stuff, due to GDPR.

Walk. Create mock-ups. Start to develop your product. Go back to your early potential customers and ask them if your MVP (or mockups) solve their problem. Pre-sell it. If you really are solving a problem, they will pay you for the software. Don’t give it away for free, but do give them an incentive to participate. If you can’t get one person to buy before it is ready, do not move onto the next stage with building your product. Or, you will launch to crickets. Go back to your mock-ups and keep going until you create something at least one person wants to buy. The one person should not be a family member or acquaintance. Once you have the pre-sale(s), conduct a Beta round where those paying users test out what you’ve built. Stay in Beta until you can leverage testimonials from your users. Leverage this time to plan for what comes next, an influx of customers based of your client’s testimonials.

I'm not sure this completely applies to what we're doing with MoodleNet. It's effectively a version of what Tim Ferriss outlines in The 4-Hour Work Week when he suggests creating a page for a product that doesn't exist and taking sign-ups after someone presses the 'Buy' button.

What I think we can do is create clickable prototypes using something like Adobe XD, which allows users to give feedback on specific features. We can use this UX feedback to create an approach ready for when the technical architecture is built.

Run. Once your Beta is proven, RUN! Run as fast as you can and get Sales. The founder (or one of the founders) must be willing to hustle for sales. I recommend downloading the startup course from Close.io. Steli gives amazing advice.

While MoodleNet needs to be sustainable, this isn't about huge sales growth but about serving educators. We do want as many people to use the platform as possible, and we want to grow in a way where there's a feedback loop. So we may end up doing something like giving our initial cohort a certain number of invites to encourage their friends/colleagues to join.

Food for thought, certainly.

Source: Rebecca Guthrie

Issue 308: World Cup(cake)

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Higher Education and blockchain

I’ve said it before, and I’ll say it again: the most useful applications of blockchain technologies are incredibly boring. That goes in education, too.

This post by Chris Fellingham considers blockchain in the context of Higher Education, and in particular credentialing:

The short pitch is that as jobs and education go digital, we need digital credentials for our education and those need to be trustworthy and automisable. Decentralised trust systems may well be the future but I don’t see that it solves a core problem. Namely that the main premium market for Higher Education Edtech is geared towards graduates in developed countries and that market — does not have a problem of trust in its credentials — it has a problem of credibility in its courses. People don’t know what it means to have done a MOOC/Specialization/MicroMasters in X which undermines the market system for it. Shoring up the credential is a second order problem to proving the intrinsic value of the course itself.

"Decentralised trust systems" is the term blockchain aficionados use, but what they actually mean is removing trust from the equation. In hiring decisions, for example, trust is replaced by cryptographic proof.

Fellingham mentions someone called ‘Smolenski’ who, after a little bit of digging, must be Natalie Smolenski, who works for Learning Machine. That organisation is a driving force, with MIT, behind the Blockcerts standard for blockchain-based digital credentialing.

Smolenski, however, is a believer, and in numerous elegant essays has argued blockchain is the latest paradigm shift in trust-based technologies. The thesis puts trust based technologies as a central driver of human development. Kinship was the first ‘trust technology’, followed by language and cultural development. Things really got going with organised religion which was the early modern driver — enabling proto-legal systems and financial systems to emerge. Total strangers could now conduct economic transactions by putting their trust in local laws (a mutually understood system for transactions) in the knowledge that it would be enforced by a trusted third party — the state. Out of this emerged market economies and currencies.

Like Fellingham, I'm not particularly enamoured with this teleological 'grand narrative' approach to history, of which blockchain believers tend to be overly fond. I'm pretty sure that human history hasn't been 'building' in any way towards anything, particularly something that involves less trust between human beings.

Blockchain at this moment is a kind of religion. It’s based on a hope of things to come:

Blockchain — be it in credential or currency form ...could well be a major — if not paradigmatic technology — but it has its own logic and fundamentally suits those who use it best — much as social networks turned out to be fertile grounds for fake news. For that reason alone, we should be far more cautious about a shift to blockchain in Higher Education — lest like fake news — it takes an imperfect system and makes it worse.

Indeed. Who on earth wants to hard-code the way things are right now in Higher Education? If your answer is 'blockchain-based credentials', then I'm not sure you really understand what the question is.

Source: Chris Fellingham (via Stephen Downes)

On 'instagrammability'

“We shape our tools and thereafter our tools shape us.” (John M. Culkin)

I choose not to use or link to Facebook services, and that includes Instagram and WhatsApp. I do, however, recognise the huge power that Instagram has over some people's lives, which, of course, trickles down to businesses and those looking to "live the Instagram lifestyle".

The design blog Dezeen picks up on a report from an Australian firm of architects, demonstrating that ‘Instagrammable moments’ are now part of their brief.

The Six Universal Truths of Influence

I’m all for user stories and creating personas, but one case looks like grounds for divorce: Bob is seen as the servant of Michelle, who wants to be photographed doing things she’s seen others doing.

One case study features Bob and Michelle, a couple with "very different ideas about what their holiday should look like."

While Bob wants to surf, drink beer and spend quality time with Michelle, she wants to “be pampered and live the Instagram life of fresh coconuts and lounging by the pool.”

In response to this type of user, designers should focus on providing what Michelle wants, since “Bob’s main job this holiday is to take pictures of Michelle.”

“Michelle wants pictures of herself in the pool, of bright colours, and of fresh attractive food,” the report says. “You’ll also find her taking pictures of remarkable indoor and outdoor artwork like murals or inspirational signage."

It’s easy to roll your eyes at this (and trust me, mine are almost rotating out of their sockets) but the historian in me finds this fascinating. I wonder if future generations will realise that architectural details were a result of photos being taken for a particular service?

Other designers taking users' Instagram preferences into account include Coordination Asia, whose recent project for restaurant chain Gaga in Shanghai has been optimised so design elements fit in a photo frame and maximise the potential for selfies.

Instagram co-founder Mike Krieger told Dezeen that he had noticed that the platform was influencing interior design.

Of course, architects and designers have to start somewhere and perhaps ‘instagrammability’ is a useful creative constraint.

"Hopefully it leads to a creative spark and things feeling different over time," [Krieger] said. "I think a bad effect would be that same definition of instagrammability in every single space. But instead, if you can make it yours, it can add something to the building."

Instagram was placed at number 66 in the latest Dezeen Hot List of the most newsworthy forces in world design.

Now that I’ve read this, I’ll be noticing this everywhere, no doubt.

Source: Dezeen

F*** off Google

This is interesting, given that Google was welcomed with open arms in London:

Google plans to implant a "Google Campus" in Kreuzberg, Berlin. We, as a decentralized network of people are committed to not letting our beloved city be taken over by this law- and tax-evading company that is building a dystopian future. Let's kick Google out of our neighborhood and lives!

What I find interesting is that not only are people organising against Google, they've also got a wiki to inform people and help wean them off Google services.

The problem that I have with ‘replacing’ Google services is that it’s usually non-trivial for less technical users to achieve. As the authors of the wiki point out:

It is though dangerous to think in terms of "alternatives", like the goal was to reach equivalence to what Google offers (and risk to always lag behind). In reality what we want is *better* services than the ones of Google, because they would rest on *better* principles, such as decentralization/distribution of services, end-to-end encryption, uncompromising free/libre software, etc.

While presenting these “alternatives” or “replacements” here, we must keep in mind that the true goal is to achieve proper distribution/decentralization of information and communication, and empower people to understand and control where their information goes.

The two biggest problems with the project of removing big companies such as Google from our lives are: (i) using web services is a social thing, and (ii) they provide such high-quality services for so little financial cost.

Whether you’re using a social network to connect with friends or working with colleagues on a collaborative document, your choices aren’t solely yours. We negotiate the topography of the web at the same time as weaving the social fabric of society. It’s not enough to give people alternatives, there has to be some leadership to go with it.

Source: Fuck off Google wiki


Seed of good (quote)

“Search for the seed of good in every adversity. Master that principle and you will own a precious shield that will guard you well through all the darkest valleys you must traverse. Stars may be seen from the bottom of a deep well, when they cannot be discerned from the mountaintop. So will you learn things in adversity that you would never have discovered without trouble. There is always a seed of good. Find it and prosper.”

(Og Mandino)

Where memes come from

In my TEDx talk six years ago, I explained how the understanding and remixing of memes was a great way to develop digital literacies. At that time, they were beginning to be used in advertisements. Now, as we saw with Brexit and the most recent US Presidential election, they’ve become weaponised.

This article in the MIT Technology Review references one of my favourite websites, knowyourmeme.com, which tracks the origin and influence of various memes across the web. Researchers have taken 700,000 images from this site and used an algorithm to track their spread and development. In addition, they gathered 100 million images from other sources.

Spotting visually similar images is relatively straightforward with a technique known as perceptual hashing, or pHashing. This uses an algorithm to convert an image into a set of vectors that describe it in numbers. Visually similar images have similar sets of vectors or pHashes.

The team let their algorithm loose on a database of over 100 million images gathered from communities known to generate memes, such as Reddit and its subgroup The_Donald, Twitter, 4chan’s politically incorrect forum known as /pol/, and a relatively new social network called Gab that was set up to accommodate users who had been banned from other communities.
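The hashing step itself is simple enough to sketch. Below is a minimal, self-contained illustration of the 'average hash' flavour of perceptual hashing, with a tiny grid of numbers standing in for a grayscale image. Real implementations first shrink the image and often hash frequency coefficients rather than raw pixels, and the function names here are mine, not the researchers':

```python
def average_hash(pixels):
    """Toy perceptual hash: compare each grayscale value in a small
    image grid to the overall mean, yielding a bit-string."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if v > mean else '0' for v in flat)

def hamming_distance(h1, h2):
    """Visually similar images produce hashes a small distance apart."""
    return sum(a != b for a, b in zip(h1, h2))

image = [[10, 200], [220, 30]]     # a 2x2 stand-in for an image
tweaked = [[12, 198], [221, 28]]   # the same image, slightly altered
h1, h2 = average_hash(image), average_hash(tweaked)
print(h1, hamming_distance(h1, h2))  # 0110 0 -- near-duplicates collide
```

The point is that small visual tweaks barely move the hash, which is what lets the researchers group hundreds of variants of a meme into one family.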

Whereas some things ‘go viral’ by accident and catch the original author(s) off-guard, some communities are very good at making memes that spread quickly.

Two relatively small communities stand out as being particularly effective at spreading memes. “We find that /pol/ substantially influences the meme ecosystem by posting a large number of memes, while The Donald is the most efficient community in pushing memes to both fringe and mainstream Web communities,” say Stringhini and co.

They also point out that “/pol/ and Gab share hateful and racist memes at a higher rate than mainstream communities,” including large numbers of anti-Semitic and pro-Nazi memes.

Seemingly neutral memes can also be “weaponized” by mixing them with other messages. For example, the “Pepe the Frog” meme has been used in this way to create politically active, racist, and anti-Semitic messages.

It turns out that, just like in evolutionary biology, creating a large number of variants is likely to lead to an optimal solution for a given environment.

The researchers, who have made their technique available to others to promote further analysis, are even able to throw light on the question of why some memes spread widely while others quickly die away. “One of the key components to ensuring they are disseminated is ensuring that new ‘offspring’ are continuously produced,” they say.

That immediately suggests a strategy for anybody wanting to become more influential: set up a meme factory that produces large numbers of variants of other memes. Every now and again, this process is bound to produce a hit.

For any evolutionary biologist, that may sound familiar. Indeed, it’s not hard to imagine a process that treats pHashes like genomes and allows them to evolve through mutation, reproduction, and selection.
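As a toy illustration of that idea (entirely my own sketch, not the researchers' code), here are hash-strings-as-genomes evolving by mutation, reproduction, and selection towards whatever the 'environment' happens to reward:

```python
import random

random.seed(0)  # deterministic toy run

def mutate(genome, rate=0.1):
    """Flip each bit of a hash-like genome with probability `rate`."""
    return ''.join(str(1 - int(b)) if random.random() < rate else b
                   for b in genome)

def fitness(genome, target):
    """Count positions where a variant matches what the niche favours."""
    return sum(a == b for a, b in zip(genome, target))

target = '1100110011001100'            # what the niche happens to favour
population = ['0000000000000000'] * 20

for generation in range(50):
    # reproduction with mutation, then selection of the fittest half
    offspring = [mutate(g) for g in population for _ in range(2)]
    population = sorted(offspring, key=lambda g: fitness(g, target),
                        reverse=True)[:20]

best = max(population, key=lambda g: fitness(g, target))
print(fitness(best, target), '/', len(target))  # climbs towards a perfect 16
```

Churn out enough mutated copies, keep whichever ones the environment rewards, and a 'hit' eventually emerges; which is exactly the meme-factory strategy the article describes.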

As the article states, right now it’s humans creating these memes. However, it won’t be long until we have machines doing this automatically. After all, it’s been five years since the controversy about the algorithmically-created “Keep Calm and…” t-shirts for sale on Amazon.

It’s an interesting space to watch, particularly for those interested in digital literacies (and democracy).

Source: MIT Technology Review

The seductive logic of technology (quote)

"Whenever we get swept up in the self-reinforcing momentum and seductive logic of some new technology, we forget to ask what else it might be doing, how else it might be working, and who ultimately benefits most from its appearance. Why time has been diced into the segments between notifications, why we feel so inadequate to the parade of images that reach us through our devices, just why it is that we so often feel hollow and spent. What might connect our choices and the processes that are stripping the planet, filthing the atmosphere, and impoverishing human and nonhuman lives beyond number. Whether and in what way our actions might be laying the groundwork for an oppression that is grimmer yet and still more total. And finally we forget to ask whether, in our aspiration to overcome the human, we are discarding a gift we already have at hand and barely know what to do with."

(Adam Greenfield)

Inequality, anarchy, and the course of human history

Sometimes I’m reminded of the fact that I haven’t checked in with someone’s work for a few weeks, months, or even years. I’m continually impressed with the work of my near-namesake Dougald Hine. I hope to meet him in person one day.

Going back through his recent work led me to a long article in Eurozine by David Graeber and David Wengrow about how we tend to frame history incorrectly.

Overwhelming evidence from archaeology, anthropology, and kindred disciplines is beginning to give us a fairly clear idea of what the last 40,000 years of human history really looked like, and in almost no way does it resemble the conventional narrative. Our species did not, in fact, spend most of its history in tiny bands; agriculture did not mark an irreversible threshold in social evolution; the first cities were often robustly egalitarian. Still, even as researchers have gradually come to a consensus on such questions, they remain strangely reluctant to announce their findings to the public­ – or even scholars in other disciplines – let alone reflect on the larger political implications. As a result, those writers who are reflecting on the ‘big questions’ of human history – Jared Diamond, Francis Fukuyama, Ian Morris, and others – still take Rousseau’s question (‘what is the origin of social inequality?’) as their starting point, and assume the larger story will begin with some kind of fall from primordial innocence.

Graeber and Wengrow essentially argue that most people start from the assumption that we have a choice between a life that is 'nasty, brutish, and short' (i.e. most of human history) or one that is more civilised (i.e. today). If we want the latter, we have to put up with inequality.

‘Inequality’ is a way of framing social problems appropriate to technocratic reformers, the kind of people who assume from the outset that any real vision of social transformation has long since been taken off the political table. It allows one to tinker with the numbers, argue about Gini coefficients and thresholds of dysfunction, readjust tax regimes or social welfare mechanisms, even shock the public with figures showing just how bad things have become (‘can you imagine? 0.1% of the world’s population controls over 50% of the wealth!’), all without addressing any of the factors that people actually object to about such ‘unequal’ social arrangements: for instance, that some manage to turn their wealth into power over others; or that other people end up being told their needs are not important, and their lives have no intrinsic worth. The latter, we are supposed to believe, is just the inevitable effect of inequality, and inequality, the inevitable result of living in any large, complex, urban, technologically sophisticated society.

But inequality is not the inevitable result of living in a civilised society, as they point out with some in-depth examples. I haven't got space to go through them here, but suffice it to say that it seems a classic case of historians cherry-picking their evidence.

As Claude Lévi-Strauss often pointed out, early Homo sapiens were not just physically the same as modern humans, they were our intellectual peers as well. In fact, most were probably more conscious of society’s potential than people generally are today, switching back and forth between different forms of organization every year. Rather than idling in some primordial innocence, until the genie of inequality was somehow uncorked, our prehistoric ancestors seem to have successfully opened and shut the bottle on a regular basis, confining inequality to ritual costume dramas, constructing gods and kingdoms as they did their monuments, then cheerfully disassembling them once again.

If so, then the real question is not ‘what are the origins of social inequality?’, but, having lived so much of our history moving back and forth between different political systems, ‘how did we get so stuck?’

Definitely worth a read, particularly if you think that ‘anarchy’ is the opposite of ‘civilisation’.

Source: Eurozine (via Dougald Hine)


Image CC BY-NC-SA xina

Issue #307: Home on the range

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Mediocrity (quote)

“You needn’t settle for a mediocre life just because the people around you did.”

(Joshua Fields Millburn)

Git yourself off that platform!

This week, tens of thousands of open source projects migrated their codebases away from GitHub to alternatives such as GitLab. Why? Because Microsoft announced that they’ve bought GitHub for $7.5 billion.

For those who don’t spend time in the heady world of software and web development, that sounds like a lot of money for something with a silly name. It will hopefully make things a little clearer to explain that Git is described by Wikipedia in the following way:

Git is a version control system for tracking changes in computer files and coordinating work on those files among multiple people. It is primarily used for source code management in software development, but it can be used to keep track of changes in any set of files. As a distributed revision control system it is aimed at speed, data integrity, and support for distributed, non-linear workflows.
Despite GitHub not being open source, it did, until this week, host most of the world's open source projects. You can currently use GitHub for free if your project's code is public, and the company sells the ability to create private repositories. As far as I'm aware, it's never turned a profit.
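For anyone who hasn't used Git, the Wikipedia description above can be made concrete with a few commands. A minimal sketch (the repository name, file, and commit messages are all hypothetical):

```shell
# Create a new repository and identify yourself to it
git init demo
git -C demo config user.name "Example"
git -C demo config user.email "example@example.com"

# Record a first version of a file
echo "first draft" > demo/notes.txt
git -C demo add notes.txt
git -C demo commit -m "Add first draft of notes"

# Change the file, inspect the difference, record a second version
echo "second draft" > demo/notes.txt
git -C demo diff                  # shows exactly what changed since the last commit
git -C demo commit -am "Revise notes"
git -C demo log --oneline         # the ordered history of the project
```

Every commit records who changed what, and when, which is what makes coordinating work on the same files among many people tractable.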

I’ve seen lots of reactions to the Microsoft acquisition news, but one of the more insightful posts comes from Louis-Philippe Véronneau. Like me, he doesn’t trust Microsoft at all.

Some people might be fine with Microsoft's takeover, but to me it's the straw that breaks the camel's back. For a few years now, MS has been running a large marketing campaign on how they love Linux and suddenly decided to embrace Free Software in all of its forms. More like MS BS to me.

Let us take a moment to remind ourselves that:

  • Windows is still a huge proprietary monster that rips billions of people from their privacy and rights every day.
  • Microsoft is known for spreading FUD about "the dangers" of Free Software in order to keep governments and schools from dropping Windows in favor of FOSS.
  • To secure their monopoly, Microsoft hooks up kids on Windows by giving out "free" licences to primary schools around the world. Drug dealers use the same tactics and give out free samples to secure new clients.
  • Microsoft's Azure platform - even though it can run Linux VMs - is still a giant proprietary hypervisor.
Yep.

I’m thankful that we’re now starting the MoodleNet project in a post-GDPR and post-GitHub world. We’ll be using GitLab — initially via their hosted service, but longer-term as a self-hosted solution — and as many open-source products and services as possible.

Interestingly, Véronneau notes that you can use Debian’s infrastructure (terms) or RiseUp’s infrastructure (terms) if your project aligns with their ethos.

Source: Louis-Philippe Véronneau

All the questions (quote)

“One who knows all the answers has not been asked all the questions.”

(Confucius)

Blockchain was just a stepping stone

I’m reading Adam Greenfield’s excellent book Radical Technologies: the design of everyday life at the moment. He says:

And for those of us who are motivated by commitment to a specifically participatory politics of the commons, it’s not at all clear that any blockchain-based infrastructure can support the kind of flexible assemblies we imagine. I myself come from an intellectual tradition that insists that any appearance of the word “potential” needs to be greeted with skepticism. There is no such thing as potential, in this view: there are merely states of a system that have historically been enacted, and those that have not yet been enacted. The only way to assess whether a system is capable of assuming a given state is to do the work of enacting it.  
Back in 2015, I wrote about the potential of badges and blockchain. However, these days I'm more likely to agree that it's a futuristic integrity wand.

The problem with blockchain technologies is that they tend to get lumped together as if they’re one thing. For example, some use blockchain technologies to prop up neoliberalism, whereas others seek to use them to destroy it.

As part of my research for a presentation I gave in Barcelona last year about decentralised technologies, I came across MaidSafe (“the world’s first autonomous data network”). I admit to being at the edge of my understanding here, but the idea is that the SAFE network can safely store data in an autonomous, decentralised way.

Last week, MaidSafe announced a new protocol called PARSEC (Protocol for Asynchronous, Reliable, Secure and Efficient Consensus). It solves the Byzantine Generals’ problem without recourse to the existing blockchain approach.

PARSEC solves a well-known problem in decentralised, distributed computer networks: how can individual computers (nodes) in a system reliably communicate truths (in other words, events that have taken place on the network) to each other where a proportion of the nodes are malicious (Byzantine) and looking to disrupt the system. Or to put it another way: how can a group of computers agree on which transactions have correctly taken place and in which order?
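The consensus problem described above can be illustrated with a toy sketch. To be clear, this is not PARSEC itself, just the classic quorum rule that honest nodes can outvote up to f Byzantine ones when n ≥ 3f + 1; the function name and transaction labels are illustrative:

```python
# Toy illustration of Byzantine fault tolerance: with n nodes, of
# which f may be malicious, agreement is only safe when n >= 3f + 1.
# Here each node reports a value, and we accept one only if more
# than two-thirds of all nodes back it.
from collections import Counter

def reach_consensus(votes, n):
    """Return the agreed value if a >2/3 quorum exists, else None."""
    value, count = Counter(votes).most_common(1)[0]
    return value if count * 3 > 2 * n else None

# Seven nodes, two of them Byzantine and voting for bogus transactions:
honest = ["tx-42"] * 5
byzantine = ["tx-99", "tx-13"]
print(reach_consensus(honest + byzantine, 7))       # tx-42 (5/7 > 2/3)
print(reach_consensus(["a"] * 4 + ["b"] * 3, 7))    # None: no quorum
```

The hard part, which PARSEC addresses and this sketch ignores, is reaching that agreement asynchronously, when messages arrive in different orders at different nodes.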

This protocol is GPL v3 licensed, meaning that it is "free for anyone to build upon and likely prove to be of immense value to other decentralised projects facing similar challenges". The Bitcoin blockchain network is S-L-O-W and is getting slower. It's also steadily pushing up the computing power required to achieve consensus across the network, meaning that a huge amount of electricity is being used worldwide. This is bad for our planet.
If you’re building a secure, autonomous, decentralised data and communications network for the world like we are with the SAFE Network, then the limitations of blockchain technology when it comes to throughput (transactions-per-second), ever-increasing storage challenges and lack of encryption are all insurmountable problems for any system that seeks to build a project of this magnitude.

[…]

So despite being big fans of blockchain technology for many reasons here at MaidSafe, the reality is that the data and communications networks of the future will see millions or even billions of transactions per second taking place. No matter which type of blockchain implementation you take — tweaking the quantity and distribution of nodes across the network or how many people are in control of these across a variety of locations — at the end of the day, the blockchain itself remains, by definition, a single centralised record. And for the use cases that we’re working on, blockchain technology comes with limitations of transactions-per-second that simply makes that sort of centralisation unworkable.

I confess to not having watched the hour-long YouTube video embedded in the post but, if PARSEC works, it’s another step towards a post-nation state world — for better or worse.

Source: MaidSafe blog

Living with anxiety

It’s taken me a long time to admit it to myself (and my wife) but while I don’t currently suffer from depression, I do live with a low-level general background anxiety that seems to have developed during my adult life.

Wil Wheaton, “actor, blogger, voice actor and writer” and all-round darling of the internet, has written in the last few days about his struggles with mental health. My experiences aren’t as extreme as his — I’ve never had panic attacks, and being based from home has made my working life more manageable — but I do relate.

This, in particular, resonated with me from what Wheaton had to say:

One of the many delightful things about having Depression and Anxiety is occasionally and unexpectedly feeling like the whole goddamn world is a heavy lead blanket, like that thing they put on your chest at the dentist when you get x-rays, and it’s been dropped around your entire existence without your consent.
The smallest things feel like insurmountable obstacles. One day you're dealing with people and projects across several timezones like an absolute boss; the next, just going to buy a loaf of bread at the local shop feels like a huge achievement.

We like to think we can control everything in our lives. We can’t.

I think it was then, at about 34 years-old, that I realized that Mental Illness is not weakness. It’s just an illness. I mean, it’s right there in the name “Mental ILLNESS” so it shouldn’t have been the revelation that it was, but when the part of our bodies that is responsible for how we perceive the world and ourselves is the same part of our body that is sick, it can be difficult to find objectivity or perspective.

I'm physically strong: I run, swim, and go to the gym. I (mostly!) eat the right things. My sleep routine is healthy. My family is happy and I feel loved. I've found self-medicating with L-Theanine and high doses of Vitamin D helpful. All of this means I've managed to minimise my anxiety to the greatest extent possible.

And yet, out of nowhere, a couple of times a month come waves of feelings that I can’t quite describe. They loom. Everything is not right with the world. It makes no sense to say that they don’t have a particular object or focus, but they really don’t. I can’t put my finger on them or turn what it feels like into words.

Wheaton suggests that often the things we don’t feel like doing in these situations are exactly the things we need to do:

Give yourself permission to acknowledge that you’re feeling terrible (or bad, or whatever it is you are feeling), and then do a little thing, just one single thing, that you probably don’t feel like doing, and I PROMISE you it will help. Some of those things are:

  • Take a shower.
  • Eat a nutritious meal.
  • Take a walk outside (even if it’s literally to the corner and back).
  • Do something — throw a ball, play tug of war, give belly rubs — with a dog. Just about any activity with my dogs, even if it’s just a snuggle on the couch for a few minutes, helps me.
  • Do five minutes of yoga stretching.
  • Listen to a guided meditation and follow along as best as you can.
For me, going for a run or playing with my children usually helps enormously. Anything that helps put things into perspective.

What I really appreciate in Wheaton’s article, which was an address he gave to NAMI (the American National Alliance on Mental Illness), was that he focused on the experience of undiagnosed children. It’s hard enough as an adult to realise what’s going on, so for children it must be pretty terrible.

If you’re reading this and suffer from anxiety and/or depression, let’s remember it’s 2018. It’s time to open up about all this stuff. And, as Wheaton reminds us, let’s talk to our children about this, too. The chances are that what you’re living with is genetic, so your kids will also have to deal with this at some point.

Source: Wil Wheaton

"You’re either a leader everywhere or nowhere"

I confess to not having heard of Abby Wambach, a recently-retired US soccer player, until Laura Hilliger brought her to my attention in the form of Wambach’s commencement speech to the graduates of Barnard College.

The whole thing is a fantastic call to action, particularly for women, but I wanted to call out a couple of bits in particular:

If you’re not a leader on the bench, don’t call yourself a leader on the field. You’re either a leader everywhere or nowhere.
People either look to you for guidance, or they don't. You're either the kind of person that steps up when required, or you don't. Fortunately, I had a great role model in this regard in the shape of my father. He perhaps encouraged me a little too much to be a leader, but his actions, particularly when I was younger, spoke louder than his words.

You can’t be a leader at work without being a leader at home. And by ‘leader’ I don’t think Wambach is talking about ‘bossing’ everyone, but about stepping up, being counted, and supporting/representing others.

She also writes:

As you leave here today and everyday going forward: Don’t just ask yourself, “What do I want to do?” Ask yourself: “WHO do I want to be?” Because the most important thing I've learned is that what you do will never define you. Who you are always will.
Absolutely! Decide on your values and live them. I find reading Aristotle useful in this regard, particularly his views on Eudaimonia. Choose what you stand for, and articulate the way you'd like to be. Then seek out opportunities that chime with that.

Source: Barnard College (via Freshly Brewed Thoughts)

Systems change

Over the last 15 years that I’ve been in the workplace, I’ve worked in a variety of organisations. One thing I’ve found is that those that are poor at change management are sub-standard in other ways. That makes sense, of course, because life = change.

There’s a whole host of ways to understand change within organisations. Some people seem to think that applying the same template everywhere leads to good outcomes. They’re often management consultants. Others think that every context is so different that you just have to go with your gut.

I’m of the opinion that there are heuristics we can use to make our lives easier. Yes, every situation and every organisation is different, but that doesn’t mean we can’t apply some rules of thumb. That’s why I like this ‘Managing Complex Change Model’ from Lippitt (1987), which I discovered by going down a rabbit hole from a blog post by Tom Critchlow to a blog called ‘Intense Minimalism’.

The diagram, included above, is commented upon as follows:

  • Confusion → lack of Vision: note that this can be a proper lack of vision, or the lack of understanding of that vision, often due to poor communication and syncrhonization [sic] of the people involved.
  • Anxiety → lack of Skills: this means that the people involved need to have the ability to do the transformation itself and even more importantly to be skilled enough to thrive once the transformation is completed.
  • Resistance → lack of Incentives: incentives are important as people tend to have a big inertia to change, not just for fear generated by the unknown, but also because changing takes energy and as such there needs to be a way to offset that effort.
  • Frustration → lack of Resources: sometimes change requires very little in terms of practical resources, but a lot in terms of time of the individuals involved (i.e. to learn a new way to do things), lacking resources will make progress very slow and it’s very frustrating to see that everything is aligned and ready, but doesn’t progress.
  • False Starts → lack of Action Plan: action plans don’t have to be too complicated, as small transformative changes can be done with little structure, yet, structure has to be there. For example it’s very useful to have one person to lead the charge, and everyone else agreeing they are the right person to make things happen.
I'd perhaps use different words, as anxiety can be caused by a lot more than not having the skills within your team. But, otherwise, I think it's a solid overview and a good reminder of the fundamental building blocks of systems change.

Source: Intense Minimalism (via Tom Critchlow)

Finding friends and family without smartphones, maps, or GPS

When I was four years old we moved to the North East of England. Soon after, my parents took my grandmother, younger sister (still in a pushchair) and me to the Quayside market in Newcastle-upon-Tyne.

There’s still some disagreement as to how exactly it happened, but after buying a toy monkey that wrapped around my neck using velcro, I got lost. It’s a long time ago, but I can vaguely remember my decision that, if I couldn’t find my parents or grandmother, I’d probably better head back to the car. So I did.

45 minutes later, and after the police had been called, my parents found me and my monkey sitting on the bonnet of our family car. I can still remember the registration number of that orange Ford Escort: MAT 474 V.

Now, 33 years later, we’re still not great at ensuring children don’t get lost. Yes, we have more of a culture of keeping children within sight, and we give kids smartphones at increasingly young ages, but we can do much better.

That’s why I thought this Lynq tracker, currently being crowdfunded via Indiegogo, was such a great idea. You can get the gist by watching the promo video:

youtu.be/eLKimNWfw…

Our family is off for two weeks around Europe this summer. While we’ve been a couple of times before, both involved taking our car and camping. This time, we’re interrailing and Airbnbing our way around, which increases the risk that one of our children gets lost.

Lynq looks really simple and effective to use, but isn’t going to be shipping until November — otherwise I would have backed this in an instant.

Source: The Verge

Issue #306: Bachelor lifestyle

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Why NASA is better than Facebook at writing software

Facebook’s motto, until recently, was “move fast and break things”. This chimed with a wider Silicon Valley brogrammer mentality of “f*ck it, ship it”.

NASA’s approach, as this (long-ish) Fast Company article explains, couldn’t be more different to the Silicon Valley narrative. The author, Charles Fishman, explains that the group who write the software for space shuttles are exceptional at what they do. And they don’t even start writing code until they’ve got a complete plan in place.

This software is the work of 260 women and men based in an anonymous office building across the street from the Johnson Space Center in Clear Lake, Texas, southeast of Houston. They work for the “on-board shuttle group,” a branch of Lockheed Martin Corp.’s space mission systems division, and their prowess is world renowned: the shuttle software group is one of just four outfits in the world to win the coveted Level 5 ranking of the federal government’s Software Engineering Institute (SEI), a measure of the sophistication and reliability of the way they do their work. In fact, the SEI based its standards in part on watching the on-board shuttle group do its work.
There's an obvious impact, both in terms of financial and human cost, if something goes wrong with a shuttle. Imagine if we had these kinds of standards for the impact of social networks on the psychological health of citizens and democratic health of nations!
NASA knows how good the software has to be. Before every flight, Ted Keller, the senior technical manager of the on-board shuttle group, flies to Florida where he signs a document certifying that the software will not endanger the shuttle. If Keller can’t go, a formal line of succession dictates who can sign in his place.

Bill Pate, who’s worked on the space flight software over the last 22 years, says the group understands the stakes: “If the software isn’t perfect, some of the people we go to meetings with might die.”

Software powers everything. It’s in your watch, your television, and your car. Yet the quality of most software is pretty poor.

“It’s like pre-Sumerian civilization,” says Brad Cox, who wrote the software for Steve Jobs’s NeXT computer and is a professor at George Mason University. “The way we build software is in the hunter-gatherer stage.”

John Munson, a software engineer and professor of computer science at the University of Idaho, is not quite so generous. “Cave art,” he says. “It’s primitive. We supposedly teach computer science. There’s no science here at all.”

The NASA team can sum up their process in four propositions:

  1. The product is only as good as the plan for the product.
  2. The best teamwork is a healthy rivalry.
  3. The database is the software base.
  4. Don’t just fix the mistakes — fix whatever permitted the mistake in the first place.
They don't pull all-nighters. They don't switch to the latest JavaScript library because it's all over Hacker News. Everything is documented, and the genealogy of the whole codebase is available to everyone working on it.
The most important things the shuttle group does — carefully planning the software in advance, writing no code until the design is complete, making no changes without supporting blueprints, keeping a completely accurate record of the code — are not expensive. The process isn’t even rocket science. It’s standard practice in almost every engineering discipline except software engineering.
I'm going to be bearing this in mind as we build MoodleNet. We'll have to be a bit more agile than NASA, of course. But planning and process is important stuff.


Source: Fast Company

The best teams are cognitively diverse and psychologically safe

I’ve written about this before, but this HBR article explains that successful teams require both psychological safety and cognitive diversity. Psychological safety is particularly important, I think, for remote workers:

Psychological safety is the belief that one will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes. It is a dynamic, emergent property of interaction and can be destroyed in an instant with an ill-timed sigh. Without behaviors that create and maintain a level of psychological safety in a group, people do not fully contribute — and when they don’t, the power of cognitive diversity is left unrealized. Furthermore, anxiety rises and defensive behavior prevails.
If you look at the various quadrants in the header image, taken from the HBR article, then it's clear that we should be aiming for less hierarchy and more diversity.
We choose our behavior. We need to be more curious, inquiring, experimental and nurturing. We need to stop being hierarchical, directive, controlling, and conforming. It is not just the presence of the positive behaviors in the Generative quadrant that count, it is the corresponding absence of the negative behaviors.
When you're in a leadership position, you have a massive impact on the cognitive diversity of your team (through hiring decisions) and its psychological safety (by the way you model behaviours).
How people choose to behave determines the quality of interaction and the emergent culture. Leaders need to consider not only how they will act, but as importantly, how they will not act. They need to disturb and disrupt unhelpful patterns of behavior and commit to establishing new routines. To lay the ground for successful execution everyone needs to strengthen and sustain psychological safety through continuous gestures and responses. People cannot express their cognitive difference if it is unsafe to do so. If leaders focus on enhancing the quality of interaction in their teams, business performance and wellbeing will follow.
Everyone, of course, will see themselves as being in the 'Generative' quadrant, but the trick is perhaps to get (possibly anonymous) feedback as to whether that's how other people see you.

Source: Harvard Business Review

No opinion (quote)

“It is in our power to have no opinion about a thing, and not to be disturbed in our soul; for things themselves have no natural power to form our judgements."

(Marcus Aurelius)

On 'academic innovation'

Rolin Moe is in a good position to talk on the topic of ‘academic innovation’. In fact, it’s literally in his job title: ‘Assistant Professor and Director of the Institute for Academic Innovation at Seattle Pacific University’.

Moe warns, however, that it’s not necessarily a great idea to create a new discipline out of academic innovation. Until fairly recently, being ‘innovative’ was a slur, something that could get you into serious trouble.

[T]he historical usage of innovation is not as a foundational platform but a superficial label; yet in 2018 the governing bodies of societal institutions wield “innovation” in setting forth policy, administration and funding. Innovation, a term we all know but do not have a conceptual framework for, is driving change and growth in education. As regularly used without context, innovation is positioned as the future out-of-the-box solution for the problems of the present.

This makes the term a conduit of power relationships despite many proponents of innovation serving as vocal advocates for diversity, equity and inclusion in higher education. Thinking about revenue shortfalls in a time of national economic prosperity, the extraction of arts and humanities programs at a time when industry demands critical thinking from graduates, and the positioning of online learning as a democratizing tool when research shows the greatest benefit is to populations of existing privilege, the solutions offered under the innovation mantle have at best affected symptoms, at worst perpetuated causes.

Words and terms, of course, change over time. But, as Moe points out, if we’re to update the definition of innovation, we need a common understanding of what it means.

Coalescing around a common understanding is vital for the growth of “academic innovation,” but the history of innovation makes this concept problematic. Some have argued that innovation binds together disciplines such as learning technologies, leadership and change, and industrial/organizational psychology.

However, this cohesion assumes a “shared language of inquiry,” which does not currently exist. Today’s shared language around innovation is emotive rather than procedural; we use innovation to highlight the desired positive results of our efforts rather than to identify anything specific about our effort (products, processes or policies). The predominant use of innovation is to highlight the value and future-readiness of whatever the speaker supports, which is why opposite sides of issues in education (see school choice, personalized learning, etc.) use innovation in promoting their ideologies.

It seems to me that the neoliberal agenda has invaded education, as it does with any uncommodified available space, and introduced the language of the market. So we get educators using the language of Silicon Valley and attempting to ‘disrupt’ their institution.

If the goal of academic innovation is to be creative and flexible in the development, discovery and engagement of knowledge about the future of education, the foundation for knowledge accumulation and development needs to be innovative in and of itself. That must start with an operational definition of academic innovation, differentiating what innovation means to education from what it means to entrepreneurial spaces or sociological efforts.

That definition must address the negotiated history of the term, from the earliest application of the concept in government-funded research spurred by education policy during the 1960s, through overlooked innovation authors like Freeman and Thorstein Veblen. Negotiating the future we want with the history we have is vital in order to determine the best structure to support the development of an inventive network for creating research-backed, criticism-engaged and outside-the-box approaches to the future of education. The energy behind what we today call academic innovation needs to be put toward problematizing and unraveling the causes of the obstacles facing the practice of educating people of competence and character, rather than focusing on the promotion of near-future technologies and their effect on symptomatic issues.

While I’m sympathetic to the idea that educational institutions can be ‘stodgy’ places that can often need a good kick up the behind, I’m not entirely sure that academic innovation as a discipline will do anything other than legitimise the capitalist takeover of a public good.

Source: Inside Higher Ed (via Aaron Davis)

Criticism (quote)

“To learn who rules over you, simply find out who you are not allowed to criticize."

(Voltaire)

Protocols for the free web

If there’s one thing I’ve learned in my time at the intersection of education and technology, it’s that nobody cares about the important stuff, but people will go crazy if you make a small tweak to an emoji icon. 🙄

The reason you can use any web browser you want to access this website is down to standards. These are collections of protocols that define expected behaviours when you use a web browser to read what I’ve written. There are organisations and working groups ensuring that the internet doesn’t devolve into the Wild West.

This post on the We Distribute blog is an interview with Mike Macgirvin who has spent much of his adult life working on the protocols that enable social interaction on the web to happen. It’s an important read, even for less-than-technical people, as it serves to explain some of the very human decisions that shape the technology that mediates our lives.

There’s nothing magic about a protocol. It’s basically just a gentleman’s agreement about how to implement something. There are a number of levels or grades of protocols from simple in-house conventions all the way to internet specifications. The higher quality protocols have some interesting characteristics. Most importantly, these are intended as actual technical blueprints so that if two independent developers in isolated labs follow the specifications accurately, their implementations should interact together perfectly. This is an important concept.

The level of specification needed to produce this higher quality protocol is a double-edged sword. If you specify things too rigidly, projects using this protocol cannot grow or extend beyond the limits and restrictions you have specified. If you do not specify the implementation rules tightly enough, you will end up with competing products or projects that can both claim to implement the specification, yet are unable to interoperate at a basic level.

For-profit companies, and in particular those who are backed by venture capitalists, are very fond of what’s known as vendor lock-in. While there are moves afoot seeking to limit this, including those provided by GDPR, it’s a game of cat-and-mouse.

The free web, on the other hand, is different. It’s a place where, instead of being beholden to people trying to commodify and intermediate your interactions with other human beings, there is the free exchange of data and ideas.

Unfortunately, as Macgirvin points out, it’s much easier to enclose something than to ‘lock it open’:

In 2010–2012, the free web lost *hundreds of thousands* of early adopters because we had no way to easily migrate from server to server; and lots of early server administrators closed down with little or no warning. This set the free web back at least five years, because you couldn’t trust your account and identity and friendships and content to exist tomorrow. Most of the other free web projects decided that this problem should be solved by import/export tools (which we’re still waiting for in some cases).

I saw an even bigger problem. Twitter at the time was over capacity and often would be shut down for hours or a few days. What if you didn’t really want to permanently move to another server, but you just wanted to post something and stay in touch with friends/family when your server was having a bad day? This was the impetus for nomadic identity. You could take a thumbdrive and load it into any other server; and your identity is intact and you still have all your friends. Then we allowed you to “clone” your identity so you could have these backup accounts available at any time you needed them. Then we started syncing stuff between your clones so that on server ‘A’ you still have the same exact content and friends that you do on server ‘B’. They’re clones. You can post from either. If one shuts down forever, no big deal. If it has a cert issue that takes 24 hours to fix, no big deal. Your online life can continue, uninterrupted — no matter what happens to individual servers.

The trouble, of course, with all of this, is that things aren’t important until they are. So if you’re using Twitter to share photos of what you had for breakfast or status updates about the facial expressions of your cat, you’re not so bothered if the service experiences some downtime. Fast forward a couple of years and emergency services are using it to reassure the citizenry in the face of impending doom.

Those out to make a profit from commodifying social interaction are like those on the political right; they’re more likely to rally behind one another in the name of capital. The left, in this case represented by the free web, is prone to internecine conflict due to their motivation being more ideological than financial.

The way I look at it is that the free web is like family. Everybody has a dysfunctional family. You have black sheep and relatives you really just want to strangle sometimes. Thanksgiving dinner always turns into a shitfight. They’re all fundamentalist Christians and you’re more Zen Buddhist. You can’t carry on a conversation without arguing about who has the more successful career or chastising cousin Harry for his drug use.

But when you get right down to it — none of this matters. They’re family. We’re all in this together. That’s how it is with the free web, even if some projects like to think that they are the only ones that matter. Everybody matters. Each of our projects brings a unique value proposition to the table, and provides a different set of solutions and decentralised services. You can’t ignore any of them or leave any of them behind. We’re one family and we’re all busy creating something incredible. If you look at only one member of this family, you might be disappointed in the range of services that are being offered. You’re probably missing out completely on what the rest of the family is doing. Together we’re all creating a new and improved social web. There are some awesome projects tackling completely different aspects of decentralisation and offering completely different services. If we could all work together we could probably conquer the world — though that’s unlikely to happen any time soon. The first step is just to all sit down at Thanksgiving dinner without killing each other.

We get to choose the technologies we use in our lives. And those decisions matter. Decentralisation is important, particularly in regards to the social web, because no government or organisation should be given the power to mediate our interactions.

Source: We Distribute

Encumbered by civilization (quote)

“To ramble across the countryside is to disembarrass oneself of the social and mental constraints with which one is encumbered by civilization.”

(Matthew Beaumont, Nightwalking, p.231)

Paywalls and Patreon

I was part of the discussion that led to this post about Medium’s paywall. Richard Bartlett, whose work with Enspiral, Loomio, and decentralised organising I have huge respect for, has been experimenting with different options to support his work:

Last year I wrote about my dilemma: I have an ethical commitment to the commons, and I want to make a living from my writing. I want to publish all my creative work for free, and I am at my most creative when I have a reliable income. In that story I shared my long history of writing on the web, and my desire to free up time for more ambitious writing projects. Since then I have made a bunch of experiments with different ways of making money from my writing, including Patreon, the Medium Partner Program and LeanPub.

Patreon, which I've started to use for Thought Shrapnel, seems to be working out well for Bartlett, however:

To earn a full salary from Patreon, I would need many more supporters, requiring a marketing effort that starts to feel like begging. The gift economy is lovely in theory, especially because there’s no coercion: contributions are voluntary, and there is no punishment for readers who choose to not contribute. But when I interrogate these dynamics at a deeper level, I’m less satisfied.

In my point of view, social capital is subject to the same accumulative and alienating dynamics as financial capital. It’s even more dangerous in some senses, as the transactions are impossible to track, so it is much harder to redistribute accumulations of wealth.

Personally I redistribute 10% of my income to other Patreon creators who I think are doing more important and less fundable work than me: street poet David Merritt and anarchist authors William Gillis and Emmi Bevensee. At least this is a gesture to remind myself that the social capitalist is no more woke than the financial capitalist.

Frankly, as a producer, the clean transaction of buyer and seller just feels better to me. It feels good to produce something of value and have that value acknowledged by somebody purchasing it.

It's a post worth reading in its entirety, and I don't want to include more than three quotations here. Suffice it to say that Bartlett has found Medium's paywall approach useful for discovery, but actually finds LeanPub the best option:

So, the trickle of income from Patreon feels nice, but I don’t want to self-promote more than I already am. Medium’s paywall is a promising income stream, but I risk losing the audience I care most about. So far it feels like publishing on LeanPub hits the sweet spot between revenue and ethics. So I’m considering that my next experiment could be to package up my existing blog posts into a kind of “best of” ebook that people can buy if they want to support my writing.

I'd suggest that a 'paywall' is always going to be problematic. The reason I allow people to support my work is that some people just have more spare money than other people (for whatever reason) and/or some people like supporting things they value financially.

At the moment, I release microcasts as a supporter-only perk. However, given that Patreon allows ‘early access’ another approach would be to set everything on a delay. I’m still, like Bartlett, weighing up all of this, but for now Patreon seems like a great option.

Source: Richard D. Bartlett

Good, hard work (quote)

“Games make us happy because they are hard work that we choose for ourselves, and it turns out that almost nothing makes us happier than good, hard work.” (Jane McGonigal)

Issue #305: Sprinting into the distance

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Wielding your pension fund for good

Some wise words in this article in The Guardian from Aditya Chakrabortty. Perhaps it’s my age, but I’m increasingly aware of the power that we have, collectively, around where and how we spend and save our money.

In big French companies, pension savers are offered the chance to invest 10% of their money in a fond solidaire, or solidarity fund, which supports unlisted social enterprises. In Britain, your average pension member doesn’t even get consulted on what values they’d like their money to support – whether fighting climate change or building social housing. Yet, rather than tackle those issues, the Labour party seeks to build a parallel finance system, in the form of a National Investment Bank, while other left economists talk about building a sovereign wealth fund, just as Norway has done with the proceeds of North Sea oil.

But we have a sovereign wealth fund already. It’s worth over £2tn and it’s called our pension funds. The big battle is to give us agency over our own savings, rather than leaving it all to some pinstriped manager on a fat commission.

I have several pensions (Teachers' Pension, Local Government, personal, Moodle…) and, as much as I’m able, I ensure that the money is being ethically invested. There are so many frontiers on which we can change the world; not all of them are super-exciting…

Source: The Guardian

 

First tea, then revolution

I’m working with Outlandish this week, as part of a MoodleNet design sprint. One of their co-founders, Harry Robbins, is quoted in the latest issue of WIRED about the CoTech network of which Outlandish (and We Are Open), are part.

CoTech is just one example of how cooperatively-owned tech businesses look poised to proliferate in the UK. Their network boasts 32 member-businesses across the country. They’re boosted, too, by the recent launch of startup accelerator Unfound, the UK’s first accelerator for tech co-ops, which announced its first successful candidates last week. If they succeed, they will be following the lead of countries like Spain and Italy, where cooperative enterprise has flourished for decades. Their proponents see business structures as driving radical change: getting the fruits of innovation shared more fairly and providing better social responsibility. Funding troubles have often stunted co-ops’ growth though - but, with tentative links to blockchain technology and a newfound spirit of collaboration, that’s something that could now change.

It takes a while to get collaboration between different organisations off the ground, and CoTech has been no different. I really enjoyed the CoTech gathering at Wortley Hall (a worker-owned stately home) last year, but we've more work to do.

CoTech's 32 member-businesses have around 300 workers between them, with trades that range from web development to broadband infrastructure and augmented reality. The three biggest, among them Outlandish, boast turnovers of between £1 and £2 million. They’re yet to implement the equal pay suggested at their first meet-up, but they have made progress in efforts at collaboration. They now hold inter-coop training, monthly meet-ups to hold discussions and share skills, and run internal crowdfunding using the Cobudget tool (developed by New Zealand social enterprise network Enspiral).

It's only when you set up a co-op or something other than a straight-up limited company that you see the default 'operating system' of 21st-century society: capitalism. And not just warm fuzzy capitalism, but rapacious, neoliberal capitalism that sets out to deprive normal, everyday people of money, rights, and dignity.

Robbins argues that being a co-op creates a different set of incentives: with no shareholders demanding dividends, generating profit isn't the primary goal. And with it not being a quick or easy way to get rich, they’re more likely to be founded with a purpose that’s socially- or ethically-minded.

He sees big openings for CoTech to grow in both their member businesses and their respective staff – and thinks a lot of the UK’s small businesses are already effectively operating as co-ops. In an overheated market for developers, he believes that a big proportion of them want to work for companies that are socially responsible, but don’t want to do the repetitive web maintenance on offer at many charities.

It's great to see CoTech continue to get mainstream press. Interestingly, and as you can see from the photo of the Rochdale pioneers that accompanies both this post and the WIRED article, traditional co-ops weren't necessarily any more diverse than their mainstream counterparts. That's something that modern co-ops are actually really quite good at: diversity and democratic processes.

Source: WIRED

Sensible people

“We find very few sensible people except those who agree with our own opinion.” (François de La Rochefoucauld)

Useful mental models

While there’s nothing worse than a pedantic philosopher (I’m looking at you, Socrates), it’s definitely worth remembering that, as human beings, we’re subject to biases.

This long list of mental models from Farnam Street is worth going through. I particularly like Hanlon’s Razor:

Hard to trace in its origin, Hanlon's Razor states that we should not attribute to malice that which is more easily explained by stupidity. In a complex world, using this model helps us avoid paranoia and ideology. By not generally assuming that bad results are the fault of a bad actor, we look for options instead of missing opportunities. This model reminds us that people do make mistakes. It demands that we ask if there is another reasonable explanation for the events that have occurred. The explanation most likely to be right is the one that contains the least amount of intent.

Another that's come in handy is the Fundamental Attribution Error:

We tend to over-ascribe the behavior of others to their innate traits rather than to situational factors, leading us to overestimate how consistent that behavior will be in the future. In such a situation, predicting behavior seems not very difficult. Of course, in practice this assumption is consistently demonstrated to be wrong, and we are consequently surprised when others do not act in accordance with the “innate” traits we’ve endowed them with.

A list to return to time and again.

Source: Farnam Street

Nobody is ready for GDPR

As a small business owner and co-op founder, GDPR applies to me as much as everyone else. It’s a massive ballache, but I support the philosophy behind what it’s trying to achieve.

After four years of deliberation, the General Data Protection Regulation (GDPR) was officially adopted by the European Union in 2016. The regulation gave companies a two-year runway to get compliant, which is theoretically plenty of time to get shipshape. The reality is messier. Like term papers and tax returns, there are people who get it done early, and then there’s the rest of us.

I'm definitely in "the rest of us" camp, meaning that, over the last week or so, my wife and I have spent time figuring stuff out. The main thing is getting things in order so that you've got a process in place. Different things are going to affect different organisations, well, differently.

But perhaps the GDPR requirement that has everyone tearing their hair out the most is the data subject access request. EU residents have the right to request access to review personal information gathered by companies. Those users — called “data subjects” in GDPR parlance — can ask for their information to be deleted, to be corrected if it’s incorrect, and even get delivered to them in a portable form. But that data might be on five different servers and in god knows how many formats. (This is assuming the company even knows that the data exists in the first place.) A big part of becoming GDPR compliant is setting up internal infrastructures so that these requests can be responded to.

A data subject access request isn't going to affect our size of business very much. If someone does make a request, we've got a list of places from which to manually export the data. That's obviously not a viable option for larger enterprises, who need to automate.
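Even a manual process benefits from being written down as a repeatable procedure. As a purely illustrative sketch of the automation larger enterprises need (every store name and lookup function below is invented for illustration, not a real system), aggregating one person's records from several internal stores into a portable export might look something like this:

```python
# Hypothetical sketch: collect everything each internal store holds about
# one data subject and emit it as a single portable JSON export.
# All store names and lookup functions here are invented for illustration.
import json


def export_subject_data(email, stores):
    """Build a portable export of one person's data.

    `stores` maps a store name to a lookup function that returns a dict
    of personal data for the given email (or an empty dict if nothing
    is held there).
    """
    export = {"subject": email, "records": {}}
    for name, lookup in stores.items():
        records = lookup(email)
        if records:  # only include stores that actually hold data
            export["records"][name] = records
    return json.dumps(export, indent=2)


# Stand-in lookup functions for two imaginary systems
stores = {
    "newsletter": lambda email: {"subscribed": True, "since": "2017-03-01"},
    "crm": lambda email: {},  # nothing held here for this subject
}

print(export_subject_data("alice@example.com", stores))
```

The same pattern extends to deletion and rectification requests: once there is a single registry of where personal data lives, each right becomes a loop over the same set of stores.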

To be fair, GDPR as a whole is a bit complicated. Alison Cool, a professor of anthropology and information science at the University of Colorado, Boulder, writes in The New York Times that the law is “staggeringly complex” and practically incomprehensible to the people who are trying to comply with it. Scientists and data managers she spoke to “doubted that absolute compliance was even possible.”

To my mind, GDPR is like a much more far-reaching version of the Freedom of Information Act 2000. That Act changed the nature of what citizens could expect from public bodies. I hope that the GDPR similarly changes what we all can expect from organisations who process our personal data.

Source: The Verge

Measuring ability and greatness

“Ability and greatness must be measured by virtue, not by good fortune.” (Baltasar Gracián)

Estonia goes for free public transport

Estonia is pretty much already the home of free public wifi, so this is a logical next step. The council of the capital city, Tallinn, has provided free public transport to its citizens for the last five years, following a referendum. Now the idea is to extend that to everyone — including tourists.

This article mainly comprises an interview with Allan Alaküla, Head of the Tallinn European Union Office. He makes a couple of important points:

A good thing is, of course, that it mostly appeals to people with lower to medium incomes. But free public transport also stimulates the mobility of higher-income groups. They are simply going out more often for entertainment, to restaurants, bars and cinemas. Therefore they consume local goods and services and are likely to spend more money, more often. In the end this makes local businesses thrive. It breathes new life into the city.

In other words, allowing people to move around the city without thinking about the cost encourages them to do so. This has economic and social benefits.

Before introducing free public transport, the city center was crammed with cars. This situation has improved — also because we raised parking fees. When non-Tallinners leave their cars in a park-and-ride and check in to public transport on the same day, they [not] only use public transport for free, but also won’t be charged the parking fee. We noticed that people didn’t complain about high parking fees once we offered them a good alternative.

This is great, joined-up thinking: make it really easy for visitors to the city to do the right thing. Estonia really is at the forefront of citizen and pro-social innovation, as anyone familiar with their e-Residency scheme will be aware.

Source: Pop-Up City

The toughest smartphones on the market

I found this interesting:

To help you avoid finding out the horrifying truth when your phone goes clattering to the ground, we tested all of the major smartphones by dropping them over the course of four rounds from 4 feet and 6 feet onto wood and concrete — and even into a toilet — to see which handset is the toughest.

The results?

While the result wasn't completely unexpected — after all, the phone has a ShatterShield display, which the company guarantees against cracks — the Moto Z2 Force survived drops from 6 feet onto concrete, with barely a scratch.

Apple’s least-expensive phone didn’t prove very tough at all. In fact, the $399 iPhone SE was rendered unusable before all of the others. However, this was not a big surprise, as the newer iPhone 8 and iPhone X are made with much stronger glass than the iPhone SE’s from 2016.

Summary:

  • Motorola Moto Z2 Force - Toughness score: 8.5/10
  • LG X Venture - Toughness score: 6.6/10
  • Apple iPhone X - Toughness score: 6.2/10
  • LG V30 - Toughness score: 6/10
  • Samsung Galaxy S9 - Toughness score: 6/10
  • Motorola Moto G5 Plus - Toughness score: 5.1/10
  • Apple iPhone 8 - Toughness score: 4.9/10
  • Samsung Galaxy Note 8 - Toughness score: 4.3/10
  • OnePlus 5T - Toughness score: 4.3/10
  • Huawei Mate 10 Pro - Toughness score: 4.3/10
  • Google Pixel 2 XL - Toughness score: 4.3/10
  • iPhone SE - Toughness score: 3.9/10
Source: Tom's Guide

The increase in worker-owned co-ops

This article by Eillie Anzilotti is a Fast Company ‘long read’. It’s US-focused and includes specific examples and case studies, but is, I think, more widely-applicable.

Anzilotti explains some of the benefits of worker-owned co-ops, which are increasing in number as the ‘baby boomer’ generation retires.

Because the people doing the work for the company are also the ones who own the company, they feel a greater sense of responsibility for and personal stake in helping the business succeed. While there’s still a lot of knowledge-sharing that needs to happen before co-ops go mainstream, recently, policymakers are taking notice of the benefits of worker cooperatives, and new legislation is on the way [to] support their growth. And with millions of baby boomer-owned businesses set to change hands in the upcoming decades, this transition could be an opportunity to create more democratic workplaces across the country–if business owners, workers, and advocates can work together to convert these enterprises into employee-owned cooperatives.

Hilariously, Anzilotti calls the retirement of the boomer generation a 'silver tsunami' which, more seriously, provides a huge opportunity to wrest back control from organisations that exist for the benefit of the few.

But instead of selling to a private owner, there’s a real opportunity amid this “silver tsunami” to radically scale the presence of worker-owned cooperatives in the U.S. “Historically, co-ops do best when there’s a market failure,” says Melissa Hoover, founding executive director of DAWI. During the Great Depression, for instance, farmers struggling to access energy resources set up electrical cooperatives that they collectively owned, and cooperative housing models took off in some cities. Nearly a century later, we’re living through our own version of market failure. As banks have consolidated, capital for small businesses has grown scarce. More small businesses are now closing than opening in the U.S., and jobs are consistently failing to provide livable wages to employees.

Small businesses are vital in the economy, but to really make a change, we need larger, stronger businesses. Worker-owned co-ops can do that.

Employee-owned cooperatives... create a stronger base from which a business can continue to exist, and even grow. The workers already have demonstrated their commitment to the company and the community in which it operates, and granting them ownership allows the business to continue to operate and the community to continue to reap the benefits. And because the sales are done in a way that’s transparent and mutually beneficial, the selling business owners also get a fairer shake.

The difficulty, as Anzilotti notes, is that talking about democratic control of the organisation for which you work isn't necessarily the most scintillating topic of conversation.

“Co-ops are not whiz-bang businesses that are going to get anybody rich,” Hoover says. “They’re bread and butter types–necessary and profitable, but not sexy.” Still, communities and policymakers alike are recognizing that their shared ownership structure can provide the kind of stability that the market cannot. “We’ve seen growing interest in rapidly changing cities and in rural areas where they’re really trying to make capital investments that anchor community wealth,” Hoover says. “Business retention makes more sense than trying to attract Amazon HQ2,” she adds. “Why don’t we invest in our local ecosystem and retain what’s already here?”

I have to say that the process of setting up We Are Open Co-op has been one of the most eye-opening experiences of my life. I'd highly recommend looking into co-operative models for your organisation, whether extant or nascent.

Source: Fast Company  

Issue #304: Grateful Dead Public Radio

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

The New Octopus: going beyond managerial interventions for internet giants

This article in Logic magazine was brought to my attention by a recent issue of Ian O’Byrne’s excellent TL;DR newsletter. It’s a long read, focusing on the structural power of internet giants such as Amazon, Facebook, and Google.

The author, K. Sabeel Rahman, is an assistant professor of law at Brooklyn Law School and a fellow at the Roosevelt Institute. He uses historical analogues to make his points, while noting how different the current state of affairs is from a century ago.

As in the Progressive Era, technological revolutions have radically transformed our social, economic, and political life. Technology platforms, big data, AI—these are the modern infrastructures for today’s economy. And yet the question of what to do about technology is fraught, for these technological systems paradoxically evoke both bigness and diffusion: firms like Amazon and Alphabet and Apple are dominant, yet the internet and big data and AI are technologies that are by their very nature diffuse.

The problem, however, is not bigness per se. Even for Brandeisians, the central concern was power: the ability to arbitrarily influence the decisions and opportunities available to others. Such unchecked power represented a threat to liberty. Therefore, just as the power of the state had to be tamed through institutional checks and balances, so too did this private power have to be contested—controlled, held to account.

This emphasis on power and contestation, rather than literal bigness, helps clarify the ways in which technology’s particular relationship to scale poses a challenge to ideals of democracy, liberty, equality—and what to do about it.

I think this is the thing that concerns me most. Just as the banks were ‘too big to fail’ during the economic crisis and had to be bailed out by the taxpayer, so huge technology companies are increasingly playing that kind of role elsewhere in our society.

The problem of scale, then, has always been a problem of power and contestability. In both our political and our economic life, arbitrary power is a threat to liberty. The remedy is the institutionalization of checks and balances. But where political checks and balances take a common set of forms—elections, the separation of powers—checks and balances for private corporate power have proven trickier to implement.

These various mechanisms—regulatory oversight, antitrust laws, corporate governance, and the countervailing power of organized labor— together helped create a relatively tame, and economically dynamic, twentieth-century economy. But today, as technology creates new kinds of power and new kinds of scale, new variations on these strategies may be needed.

“Arbitrary power is a threat to liberty.” Absolutely, no matter whether the company holding that power has been problematic in the past, has a slogan promising not to do anything wrong, or is well-liked by the public.

We need more than regulatory oversight of such organisations because of how insidious their power can be — much like the image of Luks' octopus that accompanies this and the original post.

Rahman explains three types of power held by large internet companies:

First, there is transmission power. This is the ability of a firm to control the flow of data or goods. Take Amazon: as a shipping and logistics infrastructure, it can be seen as directly analogous to the railroads of the nineteenth century, which enjoyed monopolized mastery over the circulation of people, information, and commodities. Amazon provides the literal conduits for commerce.

[…]

A second type of power arises from what we might think of as a gatekeeping power. Here, the issue is not necessarily that the firm controls the entire infrastructure of transmission, but rather that the firm controls the gateway to an otherwise decentralized and diffuse landscape.

This is one way to understand the Facebook News Feed, or Google Search. Google Search does not literally own and control the entire internet. But it is increasingly true that for most users, access to the internet is mediated through the gateway of Google Search or YouTube’s suggested videos. By controlling the point of entry, Google exercises outsized influence on the kinds of information and commerce that users can ultimately access—a form of control without complete ownership.

[…]

A third kind of power is scoring power, exercised by ratings systems, indices, and ranking databases. Increasingly, many business and public policy decisions are based on big data-enabled scoring systems. Thus employers will screen potential applicants for the likelihood that they may quit, be a problematic employee, or participate in criminal activity. Or judges will use predictive risk assessments to inform sentencing and bail decisions.

These scoring systems may seem objective and neutral, but they are built on data and analytics that bake into them existing patterns of racial, gender, and economic bias.

[…]

Each of these forms of power is infrastructural. Their impact grows as more and more goods and services are built atop a particular platform. They are also more subtle than explicit control: each of these types of power enable a firm to exercise tremendous influence over what might otherwise look like a decentralized and diffused system.

As I quote Adam Greenfield as saying in Microcast #021 (supporters only!), this infrastructural power is less obvious because of the immateriality of the world controlled by internet giants. We need more than managerial approaches to solving the problems posed by their power.

A more radical response, then, would be to impose structural restraints: limits on the structure of technology firms, their powers, and their business models, to forestall the dynamics that lead to the most troubling forms of infrastructural power in the first place.

One solution would be to convert some of these infrastructures into “public options”—publicly managed alternatives to private provision. Run by the state, these public versions could operate on equitable, inclusive, and nondiscriminatory principles. Public provision of these infrastructures would subject them to legal requirements for equal service and due process. Furthermore, supplying a public option would put competitive pressures on private providers.

[…]

We can also introduce structural limits on technologies with the goal of precluding dangerous concentrations of power. While much of the debate over big data and privacy has tended to emphasize the concerns of individuals, we might view a robust privacy regime as a kind of structural limit: if firms are precluded from collecting or using certain types of data, that limits the kinds of power they can exercise.

Some of this is already happening, thankfully, through structural limitations such as GDPR. I hope this is the first step in a more coordinated response to internet giants who increasingly have more impact on the day-to-day lives of citizens than their governments.

Moving fast and breaking things is inevitable in moments of change. The issue is which things we are willing to break—and how broken we are willing to let them become. Moving fast may not be worth it if it means breaking the things upon which democracy depends.

It's a difficult balance. However, just as GDPR has put in place mechanisms to prevent the over-reaching of governments and companies, we could also look to organisations with non-profit status and community ownership to provide some of the infrastructure currently being built by shareholder-owned organisations.

Having just finished reading Utopia for Realists, I definitely think the left needs to think bigger than it currently does, and really push that Overton window.

Source: Logic magazine (via Ian O’Byrne)

The New Octopus: going beyond managerial interventions for internet giants

This article in Logic magazine was brought to my attention by a recent issue of Ian O’Byrne’s excellent TL;DR newsletter. It’s a long read, focusing on the structural power of internet giants such as Amazon, Facebook, and Google.

The author, K. Sabeel Rahman, is an assistant professor of law at Brooklyn Law School and a fellow at the Roosevelt Institute. He uses historical analogues to make his points, while noting how different the current state of affairs is from a century ago.

As in the Progressive Era, technological revolutions have radically transformed our social, economic, and political life. Technology platforms, big data, AI—these are the modern infrastructures for today’s economy. And yet the question of what to do about technology is fraught, for these technological systems paradoxically evoke both bigness and diffusion: firms like Amazon and Alphabet and Apple are dominant, yet the internet and big data and AI are technologies that are by their very nature diffuse.

The problem, however, is not bigness per se. Even for Brandeisians, the central concern was power: the ability to arbitrarily influence the decisions and opportunities available to others. Such unchecked power represented a threat to liberty. Therefore, just as the power of the state had to be tamed through institutional checks and balances, so too did this private power have to be contested—controlled, held to account.

This emphasis on power and contestation, rather than literal bigness, helps clarify the ways in which technology’s particular relationship to scale poses a challenge to ideals of democracy, liberty, equality—and what to do about it.

I think this is the thing that concerns me most. Just as the banks were ‘too big to fail’ during the economic crisis and had to be bailed out by the taxpayer, so huge technology companies are increasingly playing that kind of role elsewhere in our society.

The problem of scale, then, has always been a problem of power and contestability. In both our political and our economic life, arbitrary power is a threat to liberty. The remedy is the institutionalization of checks and balances. But where political checks and balances take a common set of forms—elections, the separation of powers—checks and balances for private corporate power have proven trickier to implement.

These various mechanisms—regulatory oversight, antitrust laws, corporate governance, and the countervailing power of organized labor—together helped create a relatively tame, and economically dynamic, twentieth-century economy. But today, as technology creates new kinds of power and new kinds of scale, new variations on these strategies may be needed.

“Arbitrary power is a threat to liberty.” Absolutely, no matter whether the company holding that power has been problematic in the past, has a slogan promising not to do anything wrong, or is well-liked by the public.

We need more than regulatory oversight of such organisations because of how insidious their power can be — much like the image of Luks' octopus that accompanies this and the original post.

Rahman explains three types of power held by large internet companies:

First, there is transmission power. This is the ability of a firm to control the flow of data or goods. Take Amazon: as a shipping and logistics infrastructure, it can be seen as directly analogous to the railroads of the nineteenth century, which enjoyed monopolized mastery over the circulation of people, information, and commodities. Amazon provides the literal conduits for commerce.

[…]

A second type of power arises from what we might think of as a gatekeeping power. Here, the issue is not necessarily that the firm controls the entire infrastructure of transmission, but rather that the firm controls the gateway to an otherwise decentralized and diffuse landscape.

This is one way to understand the Facebook News Feed, or Google Search. Google Search does not literally own and control the entire internet. But it is increasingly true that for most users, access to the internet is mediated through the gateway of Google Search or YouTube’s suggested videos. By controlling the point of entry, Google exercises outsized influence on the kinds of information and commerce that users can ultimately access—a form of control without complete ownership.

[…]

A third kind of power is scoring power, exercised by ratings systems, indices, and ranking databases. Increasingly, many business and public policy decisions are based on big data-enabled scoring systems. Thus employers will screen potential applicants for the likelihood that they may quit, be a problematic employee, or participate in criminal activity. Or judges will use predictive risk assessments to inform sentencing and bail decisions.

These scoring systems may seem objective and neutral, but they are built on data and analytics that bake into them existing patterns of racial, gender, and economic bias.

[…]

Each of these forms of power is infrastructural. Their impact grows as more and more goods and services are built atop a particular platform. They are also more subtle than explicit control: each of these types of power enable a firm to exercise tremendous influence over what might otherwise look like a decentralized and diffused system.

As I quote Adam Greenfield as saying in Microcast #021 (supporters only!), this infrastructural power is less obvious because of the immateriality of the world controlled by internet giants. We need more than managerial approaches to solve the problems posed by their power.
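Rahman's point about scoring power can be made concrete with a toy sketch. This is entirely my own hypothetical illustration, not code or data from the article: a system that "objectively" learns from historically biased decisions simply replays that bias as a score.

```python
from collections import defaultdict

# Hypothetical historical decisions (toy data): candidates from group "b"
# were approved far less often than equally qualified ones from group "a".
history = [
    ("a", True), ("a", True), ("a", True), ("a", False),
    ("b", True), ("b", False), ("b", False), ("b", False),
]

def train_score(records):
    """Learn per-group approval rates -- the 'score' a naive
    frequency-based system would assign to each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approvals / total for g, (approvals, total) in counts.items()}

scores = train_score(history)
# The seemingly neutral score replays the historical bias:
# scores["a"] == 0.75, scores["b"] == 0.25
```

Nothing in the code mentions race, gender, or class, yet the output reproduces whatever bias produced the training labels — which is exactly the "objective and neutral" trap Rahman describes.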

A more radical response, then, would be to impose structural restraints: limits on the structure of technology firms, their powers, and their business models, to forestall the dynamics that lead to the most troubling forms of infrastructural power in the first place.

One solution would be to convert some of these infrastructures into “public options”—publicly managed alternatives to private provision. Run by the state, these public versions could operate on equitable, inclusive, and nondiscriminatory principles. Public provision of these infrastructures would subject them to legal requirements for equal service and due process. Furthermore, supplying a public option would put competitive pressures on private providers.

[…]

We can also introduce structural limits on technologies with the goal of precluding dangerous concentrations of power. While much of the debate over big data and privacy has tended to emphasize the concerns of individuals, we might view a robust privacy regime as a kind of structural limit: if firms are precluded from collecting or using certain types of data, that limits the kinds of power they can exercise.

Some of this is already happening, thankfully, through structural limitations such as GDPR. I hope this is the first step in a more coordinated response to internet giants who increasingly have more impact on the day-to-day lives of citizens than their governments.
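The kind of structural limit GDPR gestures at — data minimisation — can be sketched in a few lines. The field names here are hypothetical, purely for illustration: the point is that data which is never collected can never become a source of downstream power.

```python
# Hypothetical sketch of data minimisation as a structural limit:
# fields a firm is precluded from collecting never reach storage,
# so no scoring or gatekeeping power can be built on them later.
PROHIBITED_FIELDS = {"health_status", "political_opinion", "biometrics"}

def minimise(record):
    """Return a copy of the record with prohibited fields dropped
    before it is ever persisted."""
    return {k: v for k, v in record.items() if k not in PROHIBITED_FIELDS}

stored = minimise({
    "email": "user@example.com",
    "political_opinion": "dropped-at-source",
})
# stored == {"email": "user@example.com"}
```

The design choice matters: filtering at the point of collection is a structural restraint, whereas filtering at the point of use is merely a managerial one that a future business model can quietly reverse.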

Moving fast and breaking things is inevitable in moments of change. The issue is which things we are willing to break—and how broken we are willing to let them become. Moving fast may not be worth it if it means breaking the things upon which democracy depends.
It's a difficult balance. However, just as GDPR has put in place mechanisms to prevent the over-reaching of governments and companies, perhaps we could think differently about organisations with non-profit status and community ownership providing some of the infrastructure currently being built by shareholder-owned organisations.

Having just finished reading Utopia for Realists, I definitely think the left needs to think bigger than it’s currently doing, and really push that Overton window.

Source: Logic magazine (via Ian O’Byrne)

Schedule your priorities

“The key is not to prioritize what’s on your schedule, but to schedule your priorities.”

(Stephen Covey)

Owners need to invest in employees to have them feel invested in their work

Jim Whitehurst, CEO of Red Hat, writes:

As the nature of work changes, the factors keeping people invested in and motivated by that work are changing, too. What's clear is that our conventional strategies for cultivating engagement may no longer work. We need to rethink our approach.
I think it's great that forward-thinking organisations are trying to find ways to make work more fulfilling, and be part of a more holistic approach to life.
Current research suggests that extrinsic rewards (like bonuses or promotions) are great at motivating people to perform routine tasks—but are actually counterproductive when we use them to motivate creative problem-solving or innovation. That means that the value of intrinsic motivation is rising, which is why cultivating employee engagement is such an important topic right now.

Don’t get me wrong: I’m not suggesting that people no longer want to be paid for their work. But a paycheck alone is no longer enough to maintain engagement. As work becomes more difficult to specify and observe, managers have to ensure excellent performance via methods other than prescription, observation, and inspection. Micromanaging complex work is impossible.

Whitehurst suggests that there are three things organisations can do. I’d support all of these:

  1. Connect to a mission and purpose
  2. Reconsider your view of failure
  3. Cultivate a sense of ownership
However, what I think is startlingly missing from almost every vision from people 40+ is that they should be thinking about actual employee ownership — not just cultivating a 'sense' of it.

Don’t get me wrong, forming a co-op doesn’t automatically guarantee worker satisfaction, but it’s a whole lot more motivating when you know you’re not just working to make someone else rich.

Source: opensource.com

On blogging

Jim Groom nails it on blogging:

[M]ost folks treat their blog as if it were some kind of glossy headshot of their thinking, whereas the beauty and freedom of blogging was that it was by design a networked tool. Blogging provides a space to develop an online voice, connect with a particular network, and build a sense of identity online in conjunction with others working through a similar process. Scale in many ways became a distraction, one which was magnified to such a degree by the hype around MOOCs in edtech that anything less than 10s of thousands of “users,” “learners,” “participants,” “followers,” etc. was tacitly considered somehow less than optimal for effective online learning. It was, and remains, a symptom of the capital-driven ethos of Silicon Valley that places all value on scale and numbers which is rooted in monetization—a reality that has infected edtech and helped to undermine the value and importance of forging an independent voice and intimate connections through what should be an independent media of expression. When scale is the endgame the whole process becomes bogged down in page views, followers, and likes rather than the freedom to explore and experiment with your ideas online. It’s a uniquely web-based version of Hell where the dominant form of communication online is a Medium think piece written by your friendly neighborhood thought leader.
You could accuse Thought Shrapnel of being glossy, but it's just a shiny version of what's in my head.

Source: bavatuesdays

Peace of mind

“For every minute you remain angry, you give up sixty seconds of peace of mind.”

(Ralph Waldo Emerson)

The disappearing computer and the future of AI

I was at the Thinking Digital conference yesterday, which is always an inspiring event. It kicked off with a presentation from a representative of Amazon’s Alexa programme, who cited an article by Walt Mossberg from this time last year. I’m pretty sure I read about it, but didn’t necessarily write about it, at the time.

Mossberg talks about how computing will increasingly become invisible:

Let me start by revising the oft-quoted first line of my first Personal Technology column in the Journal on October 17th, 1991: “Personal computers are just too hard to use, and it’s not your fault.” It was true then, and for many, many years thereafter. Not only were the interfaces confusing, but most tech products demanded frequent tweaking and fixing of a type that required more technical skill than most people had, or cared to acquire. The whole field was new, and engineers weren’t designing products for normal people who had other talents and interests.

Things are different now, of course. We expect even small children to be able to use things like iPads with minimal help.

When the internet first arrived, it was a discrete activity you performed on a discrete hunk of metal and plastic called a PC, using a discrete software program called a browser. Even now, though the net is like the electrical grid, powering many things, you still use a discrete device — a smartphone, say — to access it. Sure, you can summon some internet smarts through an Echo, but there’s still a device there, and you still have to know the magic words to say. We are a long way from the invisible, omnipresent computer in Starship Enterprise.

The Amazon representative on-stage at the conference obviously believes that voice is the next frontier in computing. That's his job. Nevertheless, he marshalled some pretty compelling, if anecdotal, evidence for that. A couple of videos showed older people, who had been completely bypassed by the smartphone revolution, interacting naturally with Alexa.

I expect that one end result of all this work will be that the technology, the computer inside all these things, will fade into the background. In some cases, it may entirely disappear, waiting to be activated by a voice command, a person entering the room, a change in blood chemistry, a shift in temperature, a motion. Maybe even just a thought.

In the same way that the front end of a website like Facebook, the user interface, is the tip of the iceberg, so voice assistants are the front end for artificial intelligence. Who gets to process the data harvested by these devices, and for what purposes, is an important issue — both now and in the future.

And, if ambient technology is to become as integrated into our lives as previous technological revolutions like wood joists, steel beams, and engine blocks, we need to subject it to the digital equivalent of enforceable building codes and auto safety standards. Nothing less will do. And health? The current medical device standards will have to be even tougher, while still allowing for innovation.

This was the last article Mossberg wrote anywhere, having been a tech journalist since 1991. In signing off, he became a little wistful about the age of gadgetry we're leaving behind, but it's hopefully for the wider good.

We’ve all had a hell of a ride for the last few decades, no matter when you got on the roller coaster. It’s been exciting, enriching, and transformative. But it’s also been about objects and processes. Soon, after a brief slowdown, the roller coaster will be accelerating faster than ever, only this time it’ll be about actual experiences, with much less emphasis on the way those experiences get made.

This is an important touchstone article, and one I'll be returning to in future, no doubt.

Source: The Verge

Trust and the cult of your PLN

This is a long article with a philosophical take on one of my favourite subjects: social networks and the flow of information. The author, C Thi Nguyen, is an assistant professor of philosophy at Utah Valley University and distinguishes between two things that he thinks have been conflated:

Let’s call them echo chambers and epistemic bubbles. Both are social structures that systematically exclude sources of information. Both exaggerate their members’ confidence in their beliefs. But they work in entirely different ways, and they require very different modes of intervention. An epistemic bubble is when you don’t hear people from the other side. An echo chamber is what happens when you don’t trust people from the other side.
Teasing things apart a bit, Nguyen gives some definitions:
Current usage has blurred this crucial distinction, so let me introduce a somewhat artificial taxonomy. An ‘epistemic bubble’ is an informational network from which relevant voices have been excluded by omission.

[…]

An ‘echo chamber’ is a social structure from which other relevant voices have been actively discredited.

[…]

In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined. The way to break an echo chamber is not to wave “the facts” in the faces of its members. It is to attack the echo chamber at its root and repair that broken trust.

It feels like towards the end of my decade as an active user of Twitter there was a definite shift from it being an ‘epistemic bubble’ towards being an ‘echo chamber’. My ‘Personal Learning Network’ (or ‘PLN’) seemed to be a bit more militant in its beliefs.
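Nguyen's distinction can even be sketched as a toy model (my own illustration, not anything from the article): a bubble excludes outside voices by omission, while a chamber hears them but assigns them no trust.

```python
# Toy model of Nguyen's bubble/chamber distinction (hypothetical).
sources = {"ally": "claim-X", "outsider": "claim-Y"}

def epistemic_bubble(followed):
    """Bubble: outside voices are excluded by omission -- never heard."""
    return {s: c for s, c in sources.items() if s in followed}

def echo_chamber(trust):
    """Chamber: every voice is heard, but outsiders are actively
    discredited via a trust weight of zero."""
    return {s: (c, trust.get(s, 0.0)) for s, c in sources.items()}

bubble = epistemic_bubble({"ally"})     # the outsider is absent entirely
chamber = echo_chamber({"ally": 1.0})   # the outsider is present, weight 0.0
```

Breaking the bubble just means adding the missing connection; breaking the chamber means repairing the trust weight — which is why, as Nguyen argues, merely waving facts at chamber members achieves nothing.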

Nguyen goes on to talk at length about fake news, sociological theories, and Cartesian epistemology. Where he ends up, however, is where I would: trust.

As Elijah Millgram argues in The Great Endarkenment (2015), modern knowledge depends on trusting long chains of experts. And no single person is in the position to check up on the reliability of every member of that chain. Ask yourself: could you tell a good statistician from an incompetent one? A good biologist from a bad one? A good nuclear engineer, or radiologist, or macro-economist, from a bad one? Any particular reader might, of course, be able to answer positively to one or two such questions, but nobody can really assess such a long chain for herself. Instead, we depend on a vastly complicated social structure of trust. We must trust each other, but, as the philosopher Annette Baier says, that trust makes us vulnerable. Echo chambers operate as a kind of social parasite on that vulnerability, taking advantage of our epistemic condition and social dependency.
That puts us in a double-bind. We need to make ourselves vulnerable in order to participate in a society built on trust, but that very vulnerability puts us in danger of being manipulated.

I see this in fanatical evangelism of blockchain solutions to the ‘problem’ of operating in a trustless environment. To my mind, we need to be trusting people more, not less. Of course, there are obvious exceptions, but breaches of trust are near the top of the list of things we should punish most in a society.

Is there anything we can do, then, to help an echo-chamber member to reboot? We’ve already discovered that direct assault tactics – bombarding the echo-chamber member with ‘evidence’ – won’t work. Echo-chamber members are not only protected from such attacks, but their belief systems will judo such attacks into further reinforcement of the echo chamber’s worldview. Instead, we need to attack the root, the systems of discredit themselves, and restore trust in some outside voices.
So the way forward is for people to develop empathy and to show trust. Not present people with evidence that they're wrong. That's never worked in the past, and it won't work now. Our problem isn't a deficit in access to information, it's a deficit in trust.

Source: Aeon (via Ian O’Byrne)

The role of Lady Luck

This post on Of Dollars and Data is a bit rambling, at least from my perspective, but I did like this paragraph:

Think about the story you tell yourself about yourself. In all the lives you could be living, in all of the worlds you could simulate, how much did luck play a role in this one? Have you gotten more than your fair share? Have you had to deal with more struggles than most? I ask you this question because accepting luck as a primary determinant in your life is one of the most freeing ways to view the world. Why? Because when you realize the magnitude of happenstance and serendipity in your life, you can stop judging yourself on your outcomes and start focusing on your efforts. It’s the only thing you can control.

I think this chimes well with Stoic philosophy: focus on the things within your control. There are going to be times in all of our lives when bad things happen. Conversely, there are going to be times when good things happen. We can't control anything apart from our reactions to these things.

Source: Of Dollars and Data

Issue #303: Rest your weary head

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Altruism

“Idealistic as it may sound, altruism should be the driving force in business, not just competition and a desire for wealth.”

(Dalai Lama)

Work-life balance is actually a circle, according to Jeff Bezos

Whatever your thoughts about Amazon, it’s hard to disagree that they’ve changed the world. Their CEO, Jeff Bezos, has some thoughts about what’s usually termed ‘work-life balance’:

This work-life harmony thing is what I try to teach young employees and actually senior executives at Amazon too. But especially the people coming in. I get asked about work-life balance all the time. And my view is, that’s a debilitating phrase because it implies there’s a strict trade-off. And the reality is, if I am happy at home, I come into the office with tremendous energy. And if I am happy at work, I come home with tremendous energy.
Of course, if you work from home (as I do) being happy at home is crucial to being happy at work.

I like his metaphor of a circle, about it not being a trade-off or ‘balance’:

It actually is a circle; it’s not a balance. And I think that is worth everybody paying attention to it. You never want to be that guy — and we all have a coworker who’s that person — who as soon as they come into a meeting they drain all the energy out of the room. You can just feel the energy go whoosh! You don’t want to be that guy. You want to come into the office and give everyone a kick in their step.
All of the most awesome people I know have nothing like a work-life 'balance'. Instead, they work hard, play hard, and tie that to a mission bigger than themselves.

Whether that’s true for the staff on targets in Amazon warehouses is a different matter, of course. But for knowledge workers, I think it’s spot-on.

Source: Chicago Tribune

Nothing better to do

“Work is the refuge of people who have nothing better to do.”

(Oscar Wilde)

The virtue of rest

This article in The Washington Post is, inevitably, focused on American work culture. However, I think it’s more widely applicable, even if we are a bit more chilled out in Europe.

Many victories of the labor movement were premised on the precise notion that the majority of one’s life shouldn’t be made up of work: It was the socialist Robert Owen who championed the eight-hour workday, coining the slogan “Eight hours labour, eight hours recreation, eight hours rest.” For Owen, it was important not only that workers had time to sleep after a hard day’s labor, but also that they had time to pursue their own interests — to enjoy leisure activities, cultivate their own projects, spend time with their families and so forth. After all, a life with nothing but work and sleep is akin to slavery, and not particularly dignified.
Most mornings, I wake up rested and ready for work. Like most people, there are some mornings that I don't. Unsurprisingly, the mornings when I don't feel ready for work are those that follow days when I've had to do more work-related tasks than usual.
There’s a balance to be struck where it comes to work and rest, but in the United States, values and laws are already slanted drastically in favor of work. I would advise those concerned about Americans’ dignity, freedom and independence to not focus on compelling work for benefits or otherwise trying to marshal people into jobs when what they really need are health care, housing assistance, unemployment benefits and so forth.
I'm reading Utopia for Realists at the moment, which has some excellent suggestions. It presents some startlingly simple, well-researched ways forward. I think my favourite part is where the author, Rutger Bregman, points out that people who are in need require direct help, rather than complex schemes.

The same goes with our so-called ‘work-life’ balance. What we actually need for a flourishing, healthy society and democracy is more rest. As Alex Pang, author of the book Rest notes, leisure is usually framed these days as a way to get more work done. Instead, we should value it for its own sake.

Source: The Washington Post

Tolerating uncertainty

Although claims about the ‘unprecedented’ times we live in can be overblown, I think it’s reasonable to state that we exist in an uncertain world.

This article by Kristin Wong in The Cut talks about the importance of being able to tolerate uncertainty, as this “improves our decisions, promotes empathy, and boosts creativity,” according to Jamie Holmes, a Future Tense Fellow at New America and author of the book Nonsense: The Power of Not Knowing.

Uncertainty can create cognitive dissonance, the discomfort of holding two contradictory thoughts, opinions, or beliefs. Ironically, though, not being able to deal with uncertainty can be equally distressing. An intolerance of uncertainty is linked to anxiety and depression. So how do you get better at tolerating it?
The article suggests that you start off with a quiz to ascertain your tolerance to ambiguity and uncertainty. However, life is short, so I'd skip that and move onto the meat of the article.

We’re better or worse at tolerating uncertainty and ambiguity in different situations. It’s not like we have a single emotional gear.

There are certain times you might be extra susceptible to certainty, Holmes suggests. “Our need for closure is heightened when we’re rushed, bored, tired, or tipsy,” he said. So when you’re feeling any of those things, or maybe all of them, be aware that you might be prone to cognitive closure at that time.

Your desire for certainty probably also varies depending on the situation. You might be anxious over your bank account, for instance, but you don’t really care how you did on your performance review. Pinpoint these concerns, then avoid what Michel Dugas, a professor of psychology at the University of Quebec in Outaouais, calls “certainty seeking behavior.”

In order to improve our relationship with uncertainty, we need to get out of our comfort zones, and out of our heads.

“Two ways to get comfortable with uncertainty, perhaps surprisingly, are reading fiction and multicultural experiences,” Holmes says. “Make reading short stories or novels a habit. Likely because it invites us inside the worlds and minds of characters unlike ourselves, fiction makes ‘otherness’ less threatening.” He adds that both fiction and multicultural experiences not only lower our need for closure and help us make better decisions, but they also make us more empathetic. Research, like this 2010 study, shows that multicultural experiences fuel creativity, too.

Travel, reading, learning a new language, experiencing another culture — these all present new experiences to your brain, which force you outside of your comfort zone in rewarding ways. Also: They are fun. Sounds like a pretty certain win-win.

I've actually read Holmes' book. I'm not sure whether it's because I'm a Philosophy graduate who's already done some work on ambiguity, but I found it underwhelming. It is, however, worth thinking about ways in which we can all deal with uncertainty.

Source: The Cut (via Stowe Boyd)

Alexa for Kids as babysitter?

I’m just on my way out of the house to head for Scotland to climb some mountains with my wife.

But while she does (what I call) her ‘last minute faffing’ I read Dan Hon’s newsletter. I’ll just quote the relevant section without any attempt at comment or analysis.

He includes references in his newsletter, but you’ll just have to click through for those.

Mat Honan reminded me that Amazon have made an Alexa for Kids (during the course of which Tom Simonite had a great story about Alexa diligently and non-plussedly educating a group of preschoolers about the history of FARC after misunderstanding their requests for farts) and Honan has a great article about it. There are now enough Alexa (plural?) out there that the phenomenon of "the funny things kids say to Alexa" is pretty well documented, as is the earlier "Alexa is teaching my kid to be rude" observation. This isn't to say that Amazon haven't done *any* work thinking about how Alexa works in a kid context (Honan's article shows that they've demonstrably thought about how Alexa might work and that they've made changes to the product to accommodate children as a specific class of user) but the overwhelming impression I had after reading Honan's piece was that, as a parent, I still don't think Amazon have gone far enough in making Alexa kid-friendly.

They’ve made some executive decisions like coming down hard on curation versus algorithmic selection of content (see James Bridle’s excellent earlier essay on YouTube, that something is wrong on the internet and recent coverage of YouTube Kids' content selection method still finding ways to recommend, shall we say, videos espousing extreme views). And Amazon have addressed one of the core reported issues of having an Alexa in the house (the rudeness) by designing in support for a “magic word” Easter Egg that will reward kids for saying “please”. But that seems rather tactical and dealing with a specific issue and not, well, foundational. I think that the foundational issue is something more like this: parenting is a very personal subject. As I have become a parent, I have discovered (and validated through experimental data) that parents have very specific views about how to do things! Many parents do not agree with each other! Parents who agree with each other on some things do not agree on other things! In families where there are two parents there is much scope for disagreement on both desired outcome and method!

All of which is to say is that the current design, architecture and strategy of Alexa for Kids indicates one sort of one-size-fits-all method and that there’s not much room for parental customization. This isn’t to say that Amazon are actively preventing it and might not add it down the line - it’s just that it doesn’t really exist right now. Honan’s got a great point that:

“[For example,] take the magic word we mentioned earlier. There is no universal norm when it comes to what’s polite or rude. Manners vary by family, culture, and even region. While “yes, sir” may be de rigueur in Alabama, for example, it might be viewed as an element of the patriarchy in parts of California.”

Some parents may have very specific views on how they want to teach their kids to be polite. This kind of thinking leads me down the path of: well, are we imagining a world where Alexa or something like it is a sort of universal basic babysitter, with default norms and those who can get, well, customization? Or what someone else might call: attentive, individualized parenting?

When Alexa for Kids came out, I did about 10 seconds' worth of thinking and, based on how Alexa gets used in our house (two parents, a five year old and a 19 month old) and how our preschooler is behaving, I was pretty convinced that I’m in no way ready or willing to leave him alone with an Alexa for Kids in his room. My family is, in what some might see as that tedious middle class way, pretty strict about the amount of screen time our kids get (unsupervised and supervised) and suffice it to say that there’s considerable difference of opinion between my wife and myself on what we’re both comfortable with and at what point what level of exposure or usage might be appropriate.

And here’s where I reinforce that point again: are you okay with leaving your kids with a default babysitter, or are you the kind of person who has opinions about how you want your babysitter to act with your kids? (Yes, I imagine people reading this and clutching their pearls at the mere thought of an Alexa “babysitting” a kid but need I remind you that books are a technological object too and the issue here is in the degree of interactivity and access). At least with a babysitter I can set some parameters and I’ve got an idea of how the babysitter might interact with the kids because, well, that’s part of the babysitter screening process.

Source: Things That Have Caught My Attention s5e11

Getting on the edtech bus

As many people will be aware, the Open University (OU) is going through a pretty turbulent time in its history. As befitting the nature of the institution, a lot of conversations about its future are happening in public spaces.

Martin Weller, a professor at the university, has been vocal. In this post, a response to a keynote from Tony Bates, he offers a way forward.

I would like to... propose a new role: Sensible Ed Tech Advisor. Job role is as follows:
  • Ability to offer practical advice on adoption of ed tech that will benefit learners
  • Strong BS detector for ed tech hype
  • Interpreter of developing trends for particular context
  • Understanding of the intersection of tech and academic culture
  • Communicating benefits of any particular tech in terms that are valuable to educators and learners
  • Appreciation of ethical and social impact of ed tech
(Lest that sound like I’m creating a job description for myself, I didn’t add “interest in ice hockey” at the end, so you can tell that it isn’t)
Weller notes that Bates mentioned in his post-keynote write-up that the OU has a "fixation on print as the ‘core’ medium/technology". He doesn't think that's correct.

I’m interested in this, because the view of an institution is formed not only by the people inside it, but by the press and those who have an opinion and an audience. Weller accuses Bates of being woefully out of date. I think he’s correct to call him out on it, as I’ve witnessed recently a whole host of middle-aged white guys lazily referencing things in presentations they haven’t bothered to research very well.

It is certainly true that some disciplines do have a print preference, and Tony is correct to say that often a print mentality is transferred to online. But what this outdated view (it was probably true 10-15 years ago) suggests is a ‘get digital or else’ mentality. Rather, I would argue, we need to acknowledge the very good digital foundation we have, but find ways to innovate on top of this.

If you are fighting an imaginary analogue beast, then this becomes difficult. For instance, Tony does rightly highlight how we don’t make enough use of social media to support students, but then ignores that there are pockets of very good practice, for example the OU PG Education account and the use of social media in the Cisco courses. Rolling these out across the university is not simple, but it is the type of project that we know how to realise. But by framing the problem as one of wholesale structural, cultural change starting from a zero base, it makes achieving practical, implementable projects difficult. You can’t do that small(ish) thing until we’ve done these twenty big things.

We seem to be living at a time when those who were massive, uncritical boosters of technology in education (and society in general) are realising the errors of their ways. I actually wouldn’t count Weller as an uncritical booster, but I welcome the fact that he is self-deprecating enough to include himself in that crowd.

I would also suggest that the sort of “get on the ed tech bus or else” argument that Tony puts forward is outdated, and ineffective (I’ve been guilty of it myself in the past). And as Audrey Watters highlights tirelessly, an unsceptical approach to ed tech is problematic for many reasons. Far more useful is to focus on specific problems staff have, or things they want to realise, than suggest they just ‘don’t get it’. Having an appreciation for this intersection between ed tech (coming from outside the institution and discipline often) and the internal values and culture is also an essential ingredient in implementing any technology successfully.
This is a particularly interesting time in the history of technology in education and society. I'm glad that conversations like this are happening in the open.

Source: Martin Weller

Bootstraps

"You can't pull yourself up by your bootstraps if you have no boots."

(Joseph Hanlon)

Space as a service

This isn’t the most well-written post I’ve read this year, but it does point to a shift that I’ve noticed — perhaps because I work remotely.

Increasingly we are moving to an almost post consumer world where we are less bothered about accumulating more stuff and much more interested in being provided with services, experiences and ephemeral pleasures.

So Uber instead of cars, Spotify instead of CDs, Netflix instead of DVDs: on-demand this, on-demand that. Why bother to own something you seldom use, that becomes out of date rapidly, or that you really cannot afford? Rent it when you need it.

Some might think that these are things ‘Millennials’ do, but if that generation is defined as those born from 1980 onwards then some of those are almost 40 years old. It’s not a trend that’s going away.

When you’re used to paying monthly for software, streaming music and films instead of buying them, and renting accommodation (because you’re priced out of the housing market), then you start thinking differently about the world.

Just as it is now easy to buy almost any Software as a Service, so it will become with real estate. Space, as a Service, is the future of real estate. On demand and where you buy exactly the features, and services, you need, whenever and wherever you are.

Key though is that this extends beyond spaces rented on-demand; regardless of tenure it will become important to be able to also rent or purchase on-demand all the services one might need to make the most of your space, or to enable the most productive use of that space.

So for businesses who employ people who can do most of what they do from anywhere, the problem becomes co-ordination rather than office space. Former Mozilla colleague John O’Duinn makes this point in his upcoming book.

We really do not NEED offices anymore, we really do not NEED shops anymore. In fact we really do not NEED an awful lot of real estate. That is not to say we don’t WANT these spaces, but what we do in them will change.
So companies like WeWork are already huge, and continue to grow rapidly.
So how will all this change supply?

Well you have people who:

  • Prefer services over products
  • Don't need to go to an office to work
  • Are used to on-demand
  • And are uber connected with vast computing power in their pocket.
The answer, to me, has to be #Space As a Service - space that takes account of these four trends. Space that is specifically designed to allow humans to do what they are good at.
I think this is a hugely exciting time. I'm just hoping that we see a similar revolution around equity, both in terms of diversity within organisations and shared ownership of them.

Source: Antony Slumbers

Blockchain as a 'futuristic integrity wand'

I’ve no doubt that blockchain technology is useful for super-boring scenarios and underpinning get-rich-quick schemes, but it has very little value to the scenarios in which I work. I’m trying to build trust, not work in an environment where technology serves as a workaround.

This post by Kai Stinchcombe about the blockchain bubble is a fantastic read. The author’s summary?

Blockchain is not only crappy technology but a bad vision for the future. Its failure to achieve adoption to date is because systems built on trust, norms, and institutions inherently function better than the type of no-need-for-trusted-parties systems blockchain envisions. That’s permanent: no matter how much blockchain improves it is still headed in the wrong direction.

Fair enough, let's dig in...

People have made a number of implausible claims about the future of blockchain—like that you should use it for AI in place of the type of behavior-tracking that google and facebook do, for example. This is based on a misunderstanding of what a blockchain is. A blockchain isn’t an ethereal thing out there in the universe that you can “put” things into, it’s a specific data structure: a linear transaction log, typically replicated by computers whose owners (called miners) are rewarded for logging new transactions.

It's funny seeing people who have close to zero understanding of how blockchain works explain how it's going to 'revolutionise' X, Y, or Z. Again, it's got exciting applicability... for very boring stuff.

[H]ere’s what blockchain-the-technology is: “Let’s create a very long sequence of small files — each one containing a hash of the previous file, some new data, and the answer to a difficult math problem — and divide up some money every hour among anyone willing to certify and store those files for us on their computers.”

Now, here’s what blockchain-the-metaphor is: “What if everyone keeps their records in a tamper-proof repository not owned by anyone?”

This is the bit that really grabbed me about the post, the blockchain-as-metaphor section. People are sold on stories, not on technologies. Which is why some people are telling stories that involve magicking away all of their fears and problems with a magic blockchain wand.

People treat blockchain as a “futuristic integrity wand”—wave a blockchain at the problem, and suddenly your data will be valid. For almost anything people want to be valid, blockchain has been proposed as a solution.

It’s true that tampering with data stored on a blockchain is hard, but it’s false that blockchain is a good way to create data that has integrity.

[...]

Blockchain systems do not magically make the data in them accurate or the people entering the data trustworthy, they merely enable you to audit whether it has been tampered with. A person who sprayed pesticides on a mango can still enter onto a blockchain system that the mangoes were organic. A corrupt government can create a blockchain system to count the votes and just allocate an extra million addresses to their cronies. An investment fund whose charter is written in software can still misallocate funds.
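Stinchcombe's distinction — tampering is detectable, but the data itself can still be a lie — can be illustrated with a toy hash chain in a few lines of Python. This is purely a sketch of the data structure he describes (each record embedding a hash of the previous one); a real blockchain adds mining, networking, and consensus, none of which appear here:

```python
import hashlib
import json

def make_block(prev_hash, data):
    """Create a record that embeds the hash of the previous record."""
    payload = json.dumps({"prev_hash": prev_hash, "data": data}, sort_keys=True)
    return {
        "prev_hash": prev_hash,
        "data": data,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    }

def build_chain(entries):
    """Link each entry to the one before it, starting from a genesis placeholder."""
    chain, prev = [], "0" * 64
    for entry in entries:
        block = make_block(prev, entry)
        chain.append(block)
        prev = block["hash"]
    return chain

def verify(chain):
    """Re-derive every hash; any edited block breaks the links after it."""
    prev = "0" * 64
    for block in chain:
        expected = make_block(prev, block["data"])["hash"]
        if block["prev_hash"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["mangoes: organic", "mangoes: shipped"])
assert verify(chain)       # the untampered chain checks out
chain[0]["data"] = "mangoes: definitely organic, honest"
assert not verify(chain)   # after-the-fact editing is detectable...
# ...but nothing stops the *original* entry from being false.
```

Which is exactly the mango problem: the chain guarantees the record hasn't changed since it was written, not that it was true when it was written.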

When, like me, you think that humanity moves forward at the speed of trust and collaboration, blockchain seems like the antithesis of all that.

Projects based on the elimination of trust have failed to capture customers’ interest because trust is actually so damn valuable. A lawless and mistrustful world where self-interest is the only principle and paranoia is the only source of safety is not a paradise but a crypto-medieval hellhole.

Source: Kai Stinchcombe

Profit vs benefit

“The difference between profit and benefit is that operations producing profit can be carried out by another in my place: he would make the profit, unless he was acting on my behalf. But the fact remains that profitable activity can always be carried out by someone else. Hence the principle of competition. On the other hand, what is beneficial to me depends on gestures, acts, living moments which it would be impossible for me to delegate.”

(Frédéric Gros, A Philosophy of Walking)

Issue #302: Read aloud for maximum effect

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

What can dreams of a communist robot utopia teach us about human nature?

This article in Aeon by Victor Petrov posits that, in the post-industrial age, we no longer see human beings as primarily manual workers, but as thinkers using digital screens to get stuff done. What does that do to our self-image?

The communist parties of eastern Europe grappled with this new question, too. The utopian social order they were promising from Berlin to Vladivostok rested on the claim that proletarian societies would use technology to its full potential, in the service of all working people. Bourgeois information society would alienate workers even more from their own labour, turning them into playthings of the ruling classes; but a socialist information society would free Man from drudgery, unleash his creative powers, and enable him to ‘hunt in the morning … and criticise after dinner’, as Karl Marx put it in 1845. However, socialist society and its intellectuals foresaw many of the anxieties that are still with us today. What would a man do in a world of no labour, and where thinking was done by machines?
Bulgaria was a communist country that, after the Second World War, went from producing cigarettes to being one of the world's largest producers of computers. This had a knock-on effect on what people wrote about in the country.
The Bulgarian reader was increasingly treated to debates about what humanity would be in this new age. Some, such as the philosopher Mityu Yankov, argued that what set Man apart from the animals was his ability to change and shape nature. For thousands of years, he had done this through physical means and his own brawn. But the Industrial Revolution had started a change of Man’s own nature, which was culminating with the Information Revolution – humanity now was becoming not a worker but a ‘governor’, a master of nature, and the means of production were not machines or muscles, but the human brain.
Lyuben Dilov, a popular sci-fi author, focused on "the boundaries between man and machine, brain and computer". His books were full of societies obsessed with technology.
Added to this, there is technological anxiety, too – what is it to be a man when there are so many machines? Thus, Dilov invents a Fourth Law of Robotics, to supplement Asimov’s famous three, which states that ‘the robot must, in all circumstances, legitimate itself as a robot’. This was a reaction by science to the roboticists’ wish to give their creations ever more human qualities and appearance, making them subordinate to their function – often copying animal or insect forms. Zenon muses on human interactions with robots that start from a young age, giving the child power over the machine from the outset. This undermines our trust in the very machines on which we depend. Humans need a distinction from the robots, they need to know that they are always in power and couldn’t be lied to. For Dilov, the anxiety was about the limits of humanity, at least in its current stage – fearful, humans could not yet treat anything else, including their machines, as equals.
This all seems very pertinent at a time when deepfakes make us question what is real online. We're perhaps less worried about a Blade Runner-style dystopia and more concerned about digital 'reality' but, nevertheless, questions about what it means to be human persist.
Bulgarian robots were both to be feared and they were the future. Socialism promised to end meaningless labour but reproduced many of the anxieties that are still with us today in our ever-automating world. What can Man do that a machine cannot do is something we still haven’t solved. But, like Kesarovski, perhaps we need not fear this new world so much, nor give up our reservations for the promise of a better, easier world.
Source: Aeon

Escaping from the crush of circumstances

“Today I escaped from the crush of circumstances, or better put, I threw them out, for the crush wasn’t from outside me but in my own assumptions.”
(Marcus Aurelius)

The benefits of reading aloud to children

This article in the New York Times by Perri Klass, M.D. focuses on studies that show a link between parents reading to their children and a reduction in problematic behaviour.

This study involved 675 families with children from birth to 5; it was a randomized trial in which 225 families received the intervention, called the Video Interaction Project, and the other families served as controls. The V.I.P. model was originally developed in 1998, and has been studied extensively by this research group.

Participating families received books and toys when they visited the pediatric clinic. They met briefly with a parenting coach working with the program to talk about their child’s development, what the parents had noticed, and what they might expect developmentally, and then they were videotaped playing and reading with their child for about five minutes (or a little longer in the part of the study which continued into the preschool years). Immediately after, they watched the videotape with the study interventionist, who helped point out the child’s responses.

I really like the way that they focus on the positives and point out how much the child loves the interaction with their parent through the text.

The Video Interaction Project started as an infant-toddler program, working with low-income urban families in New York during clinic visits from birth to 3 years of age. Previously published data from a randomized controlled trial funded by the National Institute of Child Health and Human Development showed that the 3-year-olds who had received the intervention had improved behavior — that is, they were significantly less likely to be aggressive or hyperactive than the 3-year-olds in the control group.

I don't know enough about the causes of ADHD to be able to comment, but as a teacher and parent, I do know there's a link between the attention you give and the attention you receive.

“The reduction in hyperactivity is a reduction in meeting clinical levels of hyperactivity,” Dr. Mendelsohn said. “We may be helping some children so they don’t need to have certain kinds of evaluations.” Children who grow up in poverty are at much higher risk of behavior problems in school, so reducing the risk of those attention and behavior problems is one important strategy for reducing educational disparities — as is improving children’s language skills, another source of school problems for poor children.

It is a bit sad when we have to encourage parents to play with their children between birth and the age of three, but I guess in the age of smartphone addiction, we kind of have to.

Source: The New York Times

Image CC BY Jason Lander

You need more daylight to sleep better

As an historian, I’ve often been fascinated by what life must have been like before the dawn of electricity. I have a love-hate relationship with artificial light. On the one hand, I use a lightbox to stave off Seasonal Affective Disorder. On the other hand, I’ve got (my optician tells me) not only pale blue irises but very thin corneas. That makes me photophobic and subject, on a regular basis, to the kind of glare I can only imagine ‘normal’ people get after staring at a lightbulb for a while.

In this article, Linda Geddes describes an experiment in which she decided to forgo artificial light for a number of weeks to see what effect it had on her health and, most importantly, her sleep.

Working with sleep researchers Derk-Jan Dijk and Nayantara Santhi at the University of Surrey, I designed a programme to go cold-turkey on artificial light after dark, and to try to maximise exposure to natural light during the day – all while juggling an office job and busy family life in urban Bristol.
By the end of 2017, my devices all had something like f.lux built in, instead of my having to install it manually. There's a general realisation that blue light before bedtime is a bad idea. What this article points out, however, is another factor: how bright the light is that you're subjected to during the day.
Light enables us to see, but it affects many other body systems as well. Light in the morning advances our internal clock, making us more lark-like, while light at night delays the clock, making us more owlish. Light also suppresses a hormone called melatonin, which signals to the rest of the body that it’s night-time – including the parts that regulate sleep. “Apart from vision, light has a powerful non-visual effect on our body and mind, something to remember when we stay indoors all day and have lights on late into the night,” says Santhi, who previously demonstrated that the evening light in our homes suppresses melatonin and delays the timing of our sleep.
The important correlation here is between the strength of light Geddes experienced during her waking hours, and the quality of her sleep.
But when I correlated my sleep with the amount of light I was exposed to during the daytime, an interesting pattern emerged. On the brightest days, I went to bed earlier. And for every 100 lux increase in my average daylight exposure, I experienced an increase in sleep efficiency of almost 1% and got an extra 10 minutes of sleep.
This isn't just something that Geddes has experienced; studies have also found this kind of correlation.
In March 2007, Dijk and his colleagues replaced the light bulbs on two floors of an office block in northern England, housing an electronic parts distribution company. Workers on one floor of the building were exposed to blue-enriched lighting for four weeks; those on the other floor were exposed to white light. Then the bulbs were switched, meaning both groups were ultimately exposed to both types of light. They found that exposure to the blue-enriched white light during daytime hours improved the workers’ subjective alertness, performance, and evening fatigue. They also reported better quality and longer sleep.
So the key takeaway message?
It’s ridiculously simple. But spending more time outdoors during the daytime and dimming the lights in the evening really could be a recipe for better sleep and health. For millennia, humans have lived in synchrony with the Sun. Perhaps it's time we got reacquainted.
Source: BBC Future

On the cultural value of memes

I’ve always been a big fan of memes. In fact, I discuss them in my thesis, ebook, and TEDx talk. This long-ish article from Jay Owens digs into their relationship with fake news and what he calls ‘post-authenticity’. What I’m really interested in, though, comes towards the end. He gets into the power of memes and why they’re the perfect form of online cultural expression.

So through humour, exaggeration, and irony — a truth emerges about how people are actually feeling. A truth that they may not have felt able to express straightforwardly. And there’s just as much, and potentially more, community present in these groups as in many of the more traditional civic-oriented groups Zuckerberg’s strategy may have had in mind.
The thing that can be missing from text-based interactions is empathy. The right kind of meme, however, speaks using images, words, but also to something else that a group have in common.
Meme formats — from this week’s American Chopper dialectic model to now classics like the “Exploding Brain,” “Distracted Boyfriend,” and “Tag Yourself” templates — are by their very nature iterative and quotable. That is how the meme functions, through reference to the original context and memes that have come before, coupled with creative remixing to speak to a particular audience, topic, or moment. Each new instance of a meme is thereby automatically familiar and recognisable. The format carries a meta-message to the audience: “This is familiar, not weird.” And the audience is prepared to know how to react: you like, you respond with laughter-referencing emoji, you tag your friends in the comments.
Let's take this example, which Owens cites in the article. I sent it to my wife via Telegram (an instant messaging app that we use as a permanent backchannel).

90s kids

Her response, inevitably, was: 😂

It’s funny because it’s true. But it also quickly communicates solidarity and empathy.

The format acts as a kind of Trojan horse, then, for sharing difficult feelings — because the format primes the audience to respond hospitably. There isn’t that moment of feeling stuck over how to respond to a friend’s emotional disclosure, because she hasn’t made the big statement directly, but instead through irony and cultural quotation — distancing herself from the topic through memes, typically by using stock photography (as Leigh Alexander notes) rather than anything as gauche as a picture of oneself. This enables you the viewer to sidestep the full intensity of it in your response, should you choose, but still, crucially, to respond. And also to DM your friend and ask, “Hey, are you alright?” and cut to the realtalk should you so choose.
So, effectively, you can be communicating different things to different people. If, instead of sending the 90s kids image above directly to my wife via Telegram, I'd shared it to my Twitter followers, it may have elicited a different response. Some people would have liked and retweeted it, for sure, but someone who knows me well might ask if I'm OK. After all, there's a subtext in there of feeling like you're "stuck".

Owens goes on to talk about how memetic culture means that we’re living in a ‘post-authentic’ world. But did such authenticity ever really exist?

So perhaps to say that this post-authentic moment is one of evolving, increasingly nuanced collective communication norms, able to operate with multi-layered recursive meanings and ironies in disposable pop culture content… is kind of cold comfort.

Nonetheless, author Robin Sloan described the genius of the “American Chopper” meme as being that “THIS IS THE ONLY MEME FORMAT THAT ACKNOWLEDGES THE EXISTENCE OF COMPETING INFORMATION, AND AS SUCH IT IS THE ONLY FORMAT SUITED TO THE COMPLEXITY OF OUR WORLD!”

Amen to that.

Source: Jay Owens

The résumé is a poor proxy for a human being

I’ve never been a fan of the résumé, or ‘Curriculum Vitae’ (CV) as we tend to call them in the UK. How on earth can a couple of sheets of paper ever hope to sum up an individual in all of their complexity? It inevitably leads to the kind of things that end up on LinkedIn profiles: your academic qualifications, job history, and a list of hobbies that don’t make you sound like a loser.

In this (long-ish) article for Quartz, Oliver Staley looks at what Laszlo Bock is up to with his new startup, with a detour through the history of the résumé.

“Resumes are terrible,” says Laszlo Bock, the former head of human resources at Google, where his team received 50,000 resumes a week. “It doesn’t capture the whole person. At best, they tell you what someone has done in the past and not what they’re capable of doing in the future.”

I really dislike résumés, and I’m delighted that I’ve managed to get my last couple of jobs without having to rely on them. I guess that’s a huge benefit of working openly; the web is your résumé.

Resumes force job seekers to contort their work and life history into corporately acceptable versions of their actual selves, to better conform to the employer’s expectation of the ideal candidate. Unusual or idiosyncratic careers complicate resumes. Gaps between jobs need to be accounted for. Skills and abilities learned outside of formal work or education aren’t easily explained. Employers may say they’re looking for job seekers to distinguish themselves, but the resume requires them to shed their distinguishing characteristics.

Unfortunately, Henry Ford’s ‘faster horses’ rule also applies to résumés. And (cue eye roll) people need to find a way to work in buzzwords like ‘blockchain’.

The resume of the near future will be a document with far more information—and information that is far more useful—than the ones we use now. Farther out, it may not be a resume at all, but rather a digital dossier, perhaps secured on the blockchain (paywall), and uploaded to a global job-pairing engine that is sorting you, and billions of other job seekers, against millions of openings to find the perfect match.

I’m more interested in different approaches, rather than doubling-down on the existing approach, so it’s good to see large multinational companies like Unilever doing away with résumés. They prefer game-like assessments.

Two years ago, the North American division of Unilever—the consumer products giant—stopped asking for resumes for the approximately 150-200 positions it fills from college campuses annually. Instead, it’s relying on a mix of game-like assessments, automated video interviews, and in-person problem solving exercises to winnow down the field of 30,000 applicants.

It all sounds great but, at the end of the day, it’s extra unpaid work and more jumping through hoops.

The games are designed so there are no wrong answers— a weakness in one characteristic, like impulsivity, can reveal strength in another, like efficiency—and pymetrics gives candidates who don’t meet the standards for one position the option to apply for others at the company, or even at other companies. The algorithm matches candidates to the opportunities where they’re most likely to succeed. The goal, Polli says, is to eliminate the “rinse and repeat” process of submitting near identical applications for dozens of jobs, and instead use data science to target the best match of job and employee.

Back to Laszlo Bock, who claims that we should have an algorithmic system that matches people to available positions. I’m guessing he hasn’t read Brave New World.

For the system to work, it would need an understanding of a company’s corporate culture, and how people actually function within its walls—not just what the company says about its culture. And employees and applicants would need to be comfortable handing over their personal data.

For-profit entities wouldn’t be trusted as stewards of such sensitive information. Nor would governments, Bock says, noting that in communist Romania, where he was born, “the government literally had dossiers on every single citizen.”

Ultimately, Bock says, the system should be maintained by a not-for-profit, non-governmental organization. “What I’m imagining, no human being should ever look inside this thing. You shouldn’t need to,” he says.

Hiring people is a social activity. The problem of having too many applicants is a symptom of a broken system. This might sound crazy, but I feel like hierarchical structures and a lack of employee ownership cause some of the issues we see. Then, of course, there are much wider issues such as neo-colonialism, commodification, and bullshit jobs. But that's for another post (or two)...

Source: Quartz at Work

OEP (Open Educational Pragmatism?)

This is an interesting post to read, not least because I sat next to the author at the conference he describes last week, and we had a discussion about related issues. Michael Shaw, who’s a great guy and whom I’ve known for a few years, is in charge of Tes Resources.

I wondered if I would feel like an interloper at the first conference I’ve ever attended on Open Educational Resources (OERs).

It wasn’t a dress code issue (though in hindsight I should have worn trainers) but that most of the attendees at #OER18 were from universities, while only a few of us there worked for education businesses.

Shaw notes he was wary of attending the conference, not least because it's a fairly tight-knit community:
I work for a commercial company, one that makes money from advertising and recruitment services, plus — even more controversially in this context — by letting teachers sell resources to each other, and taking a percentage on transactions.
However, he found the hosts and participants "incredibly welcoming" and the debates "more open than [he'd] expected on how commercial organisations could play a part" in the ecosystem.

Shaw is keen to point out that the Tes Resources site that he manages is “a potential space for OER-sharing”. He goes on to talk about how he’s an ‘OER pragmatist’ rather than an ‘OER purist’. As a former journalist, Shaw is a great writer. However, I want to tease apart some things I think he conflates.

In his March 2018 post announcing the next phase of development for Tes Resources, Shaw announced that the goal was to create “a community of authors providing high-quality resources for educators”. He conflates that in this post with educators sharing Open Educational Resources. I don’t think the two things are the same, and that’s not because I’m an ‘OER purist’.

The concern that I, and others in the Open Education community, have around commercial players in the ecosystem is their tendency to embrace, extend, and extinguish:

  1. Embrace: Development of software substantially compatible with a competing product, or implementing a public standard.
  2. Extend: Addition and promotion of features not supported by the competing product or part of the standard, creating interoperability problems for customers who try to use the 'simple' standard.
  3. Extinguish: When extensions become a de facto standard because of their dominant market share, they marginalize competitors that do not or cannot support the new extensions.
So, think of Twitter before they closed their API: a thousand Twitter clients bloomed, and innovations such as pull-to-refresh were invented. Then Twitter decided to 'own the experience' of users and changed their API so that those third-party clients withered.

Tes Resources, Shaw admitted to me, doesn’t even have an API. It’s a bit like Medium, the place he chose to publish this post. If he’d written the post in something like WordPress, he’d be notified of my reply via web standard technologies. Medium doesn’t adhere to those standards. Nor does Tes Resources. It’s a walled garden.

My call, then, would be for Tes Resources to develop an API so that services, such as the MoodleNet project I’m leading, can query and access it. Until then, it’s not a repository. It’s just another silo.

Source: Michael Shaw

Image: CC BY Jess

Everything is potentially a meme

Despite — or perhaps because of — my feelings towards the British monarchy, this absolutely made my day:

Town crier meme - library Town crier meme - Virgin media

Isn’t the internet great?

Source: Haha

How to be super-productive

Not a huge sample size, but this article describes a study of what makes ‘super-productive’ people tick:

We collected data on over 7,000 people who were rated by their manager on their level of their productivity and 48 specific behaviors. Each person was also rated by an average of 11 other people, including peers, subordinates, and others. We identified the specific behaviors that were correlated with high levels of productivity — the top 10% in our sample — and then performed a factor analysis.
Here's the list of seven things that came out of the study:
  1. Set stretch goals
  2. Show consistency
  3. Have knowledge and technical expertise
  4. Drive for results
  5. Anticipate and solve problems
  6. Take initiative
  7. Be collaborative
In my experience, you could actually just focus on helping people with three things:
  • Show up
  • Be proactive
  • Collaborate
That's certainly been my experience of high-performers over my career so far!

Source: Harvard Business Review (via Ian O’Byrne)

Thinking outdoors

“We do not belong to those who have ideas only among books, when stimulated by books. It is our habit to think outdoors — walking, leaping, climbing, dancing, preferably on lonely mountains or near the sea where even the trails become thoughtful.” (Friedrich Nietzsche)

Issue #301: Endless horse

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Clickbait and switch?

Should you design for addiction or for loyalty? That’s the question posed by Michelle Manafy in this post for Nieman Lab. It all depends, she says, on whether you’re trying to attract users or an audience.

With advertising as the primary driver of web revenue, many publishers have chased the click dragon. Seeking to meet marketers’ insatiable desire for impressions, publishers doubled down on quick clicks. Headlines became little more than a means to a clickthrough, often regardless of whether the article would pay off or even if the topic was worthy of coverage. And — since we all know there are still plenty of publications focusing on hot headlines over substance — this method pays off. In short-term revenue, that is.

However, the reader experience that shallow clicks deliver doesn’t develop brand affinity or customer loyalty. And the negative consumer experience has actually been shown to extend to any advertising placed in its context. Sure, there are still those seeking a quick buck — but these days, we all see clickbait for what it is.

Audiences mature over time and become wary of particular approaches. Remember “…and you’ll not believe what came next” approaches?

As Manafy notes, it’s much easier to design for addiction than to build an audience. The former just requires lots and lots of tracking — something at which the web, thanks to advertising, has become spectacularly good.

For example, many push notifications are specifically designed to leverage the desire for human interaction to generate clicks (such as when a user is alerted that their friend liked an article). Push notifications and alerts are also unpredictable (Will we have likes? Mentions? New followers? Negative comments?). And this unpredictability, or B.F. Skinner’s principle of variable rewards, is the same one used in those notoriously addictive slot machines. They’re also lucrative — generating more revenue in the U.S. than baseball, theme parks, and movies combined. A pull-to-refresh even smacks of a slot machine lever.
The problem is that designing for addiction isn't a long-term strategy. Who plays Farmville these days? And the makers of Candy Crush aren't exactly crushing it with their share price.
Sure, an addict is “engaged” — clicking, liking, swiping — but what if they discover that your product is bad for them? Or that it’s not delivering as much value as it does harm? The only option for many addicts is to quit, cold turkey. Sure, many won’t have the willpower, and you can probably generate revenue off these users (yes, users). But is that a long-term strategy you can live with? And is it a growth strategy, should the philosophical, ethical, or regulatory tide turn against you?
The 'regulatory tide' referenced here is exemplified by GDPR, which is already causing a sea change in attitudes towards user data. Compliance with teeth, it seems, gets results.

Designing for sustainability isn’t just good from a regulatory point of view, it’s good for long-term business, argues Manafy:

Where addiction relies on an imbalanced and unstable relationship, loyal customers will return willingly time and again. They’ll refer you to others. They’ll be interested in your new offerings, because they will already rely on you to deliver. And, as an added bonus, these feelings of goodwill will extend to any advertising you deliver too. Through the provision of quality content, delivered through excellent experiences at predictable and optimal times, content can become a trusted ally, not a fleeting infatuation or unhealthy compulsion.
Instead of thinking of your audience as 'users' waiting for their next hit, she suggests, think of them as your audience. That's a much better approach and will help you make much better design decisions.

Source: Nieman Lab

Read for freedom

"Once you learn to read, you will be forever free."


(Frederick Douglass)

Soviet-era industrial design

While the prospects of me learning the Russian language anytime soon are effectively zero, I do have a soft spot for the country. My favourite novels are 19th century Russian fiction, the historical time period I’m most fond of is the Russian revolutions of 1917, and I really like some of the designs that came out of Bolshevik and Stalinist Russia. (That doesn’t mean I condone the atrocities, of course.)

The Soviet era, from 1950 onwards, isn’t really a time period I’ve studied in much depth. I taught it as a History teacher as part of a module on the Cold War, but that was very much focused on the American and British side of things. So I’ve missed out on some of the wonderful design that came out of that time period. Here are a couple of my favourites featured in this article. I may have to buy the book it mentions!

Soviet radio Soviet textiles

Source: Atlas Obscura

Conversational implicature

In references for jobs, former employers are required to be positive. Therefore, a reference that focuses on how polite and punctual someone is could actually be a damning indictment of their ability. Such ‘conversational implicature’ is the focus of this article:

When we convey a message indirectly like this, linguists say that we implicate the meaning, and they refer to the meaning implicated as an implicature. These terms were coined by the British philosopher Paul Grice (1913-88), who proposed an influential account of implicature in his classic paper ‘Logic and Conversation’ (1975), reprinted in his book Studies in the Way of Words (1989). Grice distinguished several forms of implicature, the most important being conversational implicature. A conversational implicature, Grice held, depends, not on the meaning of the words employed (their semantics), but on the way that the words are used and interpreted (their pragmatics).
From my point of view, this is similar to the difference between productive and unproductive ambiguity.
The distinction between what is said and what is conversationally implicated isn’t just a technical philosophical one. It highlights the extent to which human communication is pragmatic and non-literal. We routinely rely on conversational implicature to supplement and enrich our utterances, thus saving time and providing a discreet way of conveying sensitive information. But this convenience also creates ethical and legal problems. Are we responsible for what we implicate as well as for what we actually say?
For example, and as the article notes, "shall we go upstairs?" can mean a sexual invitation, which may or may not later imply consent. It's a tricky area.

I’ve noted that the more technically-minded a person, the less they use conversational implicature. In addition, and I’m not sure if this is true or just my own experience, I’ve found that Americans tend to be more literal in their communication than Europeans.

 To avoid disputes and confusion, perhaps we should use implicature less and communicate more explicitly? But is that recommendation feasible, given the extent to which human communication relies on pragmatics?
To use conversational implicature is human. It can be annoying. It can turn political. But it's an extremely useful tool, and certainly helps us all rub along together.

Source: Aeon

Ryan Holiday's 13 daily life-changing habits

Articles like this are usually clickbait with two or three useful bits of advice that you’ve already read elsewhere, coupled with some other random things to pad it out. That’s not the case with Ryan Holiday’s post, which lists:

  1. Prepare for the hours ahead
  2. Go for a walk
  3. Do the deep work
  4. Do a kindness
  5. Read. Read. Read.
  6. Find true quiet
  7. Make time for strenuous exercise
  8. Think about death
  9. Seize the alive time
  10. Say thanks — to the good and bad
  11. Put the day up for review
  12. Find a way to connect to something big
  13. Get eight hours of sleep
I'm doing pretty well on all of these at the moment, except perhaps number eleven. I used to 'call myself into the office' each month. Perhaps I should start doing that again?

 

Source: Thought Catalog

Valuing and signalling your skills

When I rocked up to the MoodleMoot in Miami back in November last year, I ran a workshop that involved human spectrograms, post-it notes, and participatory activities. Although I work in tech and my current role is effectively a product manager for Moodle, I still see myself primarily as an educator.

This, however, was a surprise for some people who didn’t know me very well before I joined Moodle. As one person put it, “I didn’t know you had that in your toolbox”. The same was true at Mozilla; some people there just saw me as a quasi-academic working on web literacy stuff.

Given this, I was particularly interested in a post from Steve Blank which outlined why he enjoys working with startup-like organisations rather than large, established companies:

It never crossed my mind that I gravitated to startups because I thought more of my abilities than the value a large company would put on them. At least not consciously. But that’s the conclusion of a provocative research paper, Asymmetric Information and Entrepreneurship, that explains a new theory of why some people choose to be entrepreneurs. The authors’ conclusion — Entrepreneurs think they are better than their resumes show and realize they can make more money by going it alone. And in most cases, they are right.
If you stop and think for a moment, it's entirely obvious that you know your skills, interests, and knowledge better than anyone who hires you for a specific role. Ordinarily, they're interested in the version of you that fits the job description, rather than you as a holistic human being.

The paper that Blank cites covers research which followed 12,686 people over 30+ years. It comes up with seven main findings, but the most interesting thing for me (given my work on badges) is the following:

If the authors are right, the way we signal ability (resumes listing education and work history) is not only a poor predictor of success, but has implications for existing companies, startups, education, and public policy that require further thought and research.
It's perhaps a little simplistic as a binary, but Blank cites a 1970s paper that uses 'lemons' and 'cherries' as metaphors to compare workers:
Lemons Versus Cherries. The most provocative conclusion in the paper is that asymmetric information about ability leads existing companies to employ only “lemons,” relatively unproductive workers. The talented and more productive choose entrepreneurship. (Asymmetric Information is when one party has more or better information than the other.) In this case the entrepreneurs know something potential employers don’t – that nowhere on their resume does it show resiliency, curiosity, agility, resourcefulness, pattern recognition, tenacity and having a passion for products.

This implication, that entrepreneurs are, in fact, “cherries” contrasts with a large body of literature in social science, which says that the entrepreneurs are the “lemons”— those who cannot find, cannot hold, or cannot stand “real jobs.”

My main takeaway from this isn’t necessarily that entrepreneurship is always the best option, but that we’re really bad at signalling abilities and finding the right people to work with. I’m convinced that using digital credentials can improve that, but only if we use them in transformational ways, rather than replicate the status quo.

Source: Steve Blank

Intimate data analytics in education

The ever-relevant and compulsively-readable Ben Williamson turns his attention to ‘precision education’ in his latest post. It would seem that now that the phrase ‘personalised learning’ has jumped the proverbial shark, people are doubling down on the rather dangerous assumption that we just need more data to provide better learning experiences.

In some ways, precision education looks a lot like a raft of other personalized learning practices and platform developments that have taken shape over the past few years. Driven by developments in learning analytics and adaptive learning technologies, personalized learning has become the dominant focus of the educational technology industry and the main priority for philanthropic funders such as Bill Gates and Mark Zuckerberg.

[…]

A particularly important aspect of precision education as it is being advocated by others, however, is its scientific basis. Whereas most personalized learning platforms tend to focus on analysing student progress and outcomes, precision education requires much more intimate data to be collected from students. Precision education represents a shift from the collection of assessment-type data about educational outcomes, to the generation of data about the intimate interior details of students’ genetic make-up, their psychological characteristics, and their neural functioning.

As Williamson points out, the collection of ‘intimate data’ is particularly concerning, particularly in the wake of the Cambridge Analytica revelations.

Many people will find the ideas behind precision education seriously concerning. For a start, there appear to be some alarming symmetries between the logics of targeted learning and targeted advertising that have generated heated public and media attention already in 2018. Data protection and privacy are obvious risks when data are collected about people’s private, intimate and interior lives, bodies and brains. The ethical stakes in using genetics, neural information and psychological profiles to target students with differentiated learning inputs are significant.
There's a very definite worldview which presupposes that we just need to throw more technology at a problem until it goes away. That may be true in some situations, but at what cost? And to what extent is the outcome an artefact of the constraints of the technologies? Hopefully my own kids will have finished school before this kind of nonsense becomes mainstream. I do, however, worry about my grandchildren.
The technical machinery alone required for precision education would be vast. It would have to include neurotechnologies for gathering brain data, such as neuroheadsets for EEG monitoring. It would require new kinds of tests, such as those of personality and noncognitive skills, as well as real-time analytics programs of the kind promoted by personalized-learning enthusiasts. Gathering intimate data might also require genetics testing technologies, and perhaps wearable-enhanced learning devices for capturing real-time data from students’ bodies as proxy psychometric measures of their responses to learning inputs and materials.
Thankfully, Williamson cites the work of academics who are proposing a different way forward. Something that respects the social aspect of learning rather than a reductionist view that focuses on inputs and outputs.
One productive way forward might be to approach precision education from a ‘biosocial’ perspective. As Deborah Youdell  argues, learning may be best understood as the result of ‘social and biological entanglements.’ She advocates collaborative, inter-disciplinary research across social and biological sciences to understand learning processes as the dynamic outcomes of biological, genetic and neural factors combined with socially and culturally embedded interactions and meaning-making processes. A variety of biological and neuroscientific ideas are being developed in education, too, making policy and practice more bio-inspired.
The trouble, of course, is that it's not enough for academics to write papers about things. Or even for journalists to write newspaper articles. Even with all of the firestorm over Facebook recently, people are still using the platform. If the advocates of 'precision education' have their way, I wonder who will actually create something meaningful that opposes their technocratic worldview?

Source: Code Acts in Education

All killer, no filler

This short post cites a talk entitled 10 Timeframes given by Paul Ford back in 2012:

Ford asks a deceivingly simple question: when you spend a portion of your life (that is, your time) working on a project, do you take into account how your work will consume, spend, or use portions of other lives? How does the ‘thing’ you are working on right now play out in the future when there are “People using your systems, playing with your toys, [and] fiddling with your abstractions”?
In the talk, Ford mentions that in a 200-seat auditorium, his speaking for an extra minute wastes over three hours of human time, all told. Not to mention those who watch the recording, of course.
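Ford's arithmetic is easy to verify: every minute a speaker uses is multiplied by the number of people listening. A minimal sketch (the `collective_minutes` helper is my own, purely illustrative):

```python
# Ford's point, as arithmetic: a speaker's time is multiplied by the
# size of the audience consuming it.
def collective_minutes(audience_size: int, extra_minutes: float) -> float:
    """Total human-minutes consumed across the whole audience."""
    return audience_size * extra_minutes

# One extra minute in front of a 200-seat auditorium:
total = collective_minutes(200, 1)
print(total / 60)  # 200 minutes, i.e. over three hours of human time
```

The same multiplication applies, of course, to every future viewer of the recording.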

When we’re designing things for other people, or indeed working with our colleagues, we need to think not only about our own productivity but how that will impact others. I find it sad when people don’t do the extra work to make it easier for the person they have the power to impact. That could be as simple as sending an email that, you know, includes the link to the thing being referenced. Or it could be an entire operating system, a building, or a new project management procedure.

I often think about this when editing video: does this one-minute section respect the time of future viewers? A minute multiplied by the number of times a video might be viewed suddenly represents a sizeable chunk of collective human resources. In this respect, ‘filler’ is irresponsible: if you know something is not adding value or meaning to future ‘consumers,’ then you are, in a sense, robbing life from them. It seems extreme to say that, yes, but hopefully contemplating the proposition has not wasted your time.
My son's at an age where he's started to watch a lot of YouTube videos. Due to the financial incentives of advertising, YouTubers fill the first minute (at least) with telling you what you're going to find out, or with meaningless drivel. Unfortunately, my son's too young to have worked that out for himself yet. And at eleven years old, you can't just be told.

In my own life and practice, I go out of my way to make life easier for other people. Ultimately, of course, it makes life easier for me. By modelling behaviours that other people can copy, you’re more likely to be the recipient of time-saving practices and courteous behaviour. I’ve still a lot to learn, but it’s nice to be nice.

Source: James Shelley (via Adam Procter)

Do what you can

“Do what you can, with what you have, where you are.”

(Theodore Roosevelt)

Systems thinking and AI

Edge is an interesting website. Its aim is:

To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves.
One recent article on the site is from Mary Catherine Bateson, a writer and cultural anthropologist who retired in 2004 from her position as Professor in Anthropology and English at George Mason University. She's got some interesting insights into systems thinking and artificial intelligence.
We all think with metaphors of various sorts, and we use metaphors to deal with complexity, but the way human beings use computers and AI depends on their basic epistemologies—whether they’re accustomed to thinking in systemic terms, whether they’re mainly interested in quantitative issues, whether they’re used to using games of various sorts. A great deal of what people use AI for is to simulate some pattern outside in the world. On the other hand, people use one pattern in the world as a metaphor for another one all the time.
That's such an interesting way of putting it, the insinuation being that some people have epistemologies (theories of knowledge) that are not really nuanced enough to deal with the world in all of its complexity. As a result, they use reductive metaphors that don't really work that well. This is obviously problematic when dealing with AI that you want to do some work for you, hence the bias (racism, sexism) which has plagued the field.
One of the most essential elements of human wisdom at its best is humility, knowing that you don’t know everything. There’s a sense in which we haven’t learned how to build humility into our interactions with our devices. The computer doesn’t know what it doesn’t know, and it's willing to make projections when it hasn’t been provided with everything that would be relevant to those projections. How do we get there? I don’t know. It’s important to be aware of it, to realize that there are limits to what we can do with AI. It’s great for computation and arithmetic, and it saves huge amounts of labor. It seems to me that it lacks humility, lacks imagination, and lacks humor. It doesn’t mean you can’t bring those things into your interactions with your devices, particularly, in communicating with other human beings. But it does mean that elements of intelligence and wisdom—I like the word wisdom, because it's more multi-dimensional—are going to be lacking.
Something I always say is that technology is not neutral and that anyone who claims it to be so is a charlatan. Technologies are always designed by a person, or group of people, for a particular purpose. That person, or people, has hopes, fears, dreams, opinions, and biases. Therefore, AI has limits.
You don’t have to know a lot of technical terminology to be a systems thinker. One of the things that I’ve been realizing lately, and that I find fascinating as an anthropologist, is that if you look at belief systems and religions going way back in history, around the world, very often what you realize is that people have intuitively understood systems and used metaphors to think about them. The example that grabbed me was thinking about the pantheon of Greek gods—Zeus and Hera, Apollo and Demeter, and all of them. I suddenly realized that in the mythology they’re married, they have children, the sun and the moon are brother and sister. There are quarrels among the gods, and marriages, divorces, and so on. So you can use the Greek pantheon, because it is based on kinship, to take advantage of what people have learned from their observation of their friends and relatives.
I like the way that Bateson talks about the difference between computer science and systems theory. It's a bit like the argument I gave about why kids need to learn to code back in 2013: it's more about algorithmic thinking than it is about syntax.
The tragedy of the cybernetic revolution, which had two phases, the computer science side and the systems theory side, has been the neglect of the systems theory side of it. We chose marketable gadgets in preference to a deeper understanding of the world we live in.
The article is worth reading in its entirety, as Bateson goes off at tangents that make it difficult to quote sections here. It reminds me that I need to revisit the work of Donella Meadows.

Source: Edge

Issue #300: Tricentennial

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

The four things you need to become an intellectual

I came across this, I think, via one of the aggregation sites I skim. It’s a letter in the form of an article by Paul J. Griffiths, who is a Professor of Catholic Theology at Duke Divinity School. In it, he replies to a student who has asked how to become an intellectual.

Griffiths breaks it down into four requirements, and then at the end gives a warning.

The first requirement is that you find something to think about. This may be easy to arrive at, or almost impossibly difficult. It’s something like falling in love. There’s an infinite number of topics you might think about, just as there’s an almost infinite number of people you might fall in love with. But in neither case is the choice made by consulting all possibilities and choosing among them. You can only love what you see, and what you see is given, in large part, by location and chance.
There's a tension here, isn't there? Given the almost infinite multiplicity of things it's possible to spend life thinking about and concentrating upon, how does one choose between them? Griffiths mentions the role of location and chance, but I'd also throw in tendencies. If you notice yourself liking a particular style of art, captivated by a certain style of writing, or enthralled by a way of approaching the world, this may be a clue that you should investigate it further.
The second requirement is time: You need a life in which you can spend a minimum of three uninterrupted hours every day, excepting sabbaths and occasional vacations, on your intellectual work. Those hours need to be free from distractions: no telephone calls, no email, no texts, no visits. Just you. Just thinking and whatever serves as a direct aid to and support of thinking (reading, writing, experiment, etc.). Nothing else. You need this because intellectual work is, typically, cumulative and has momentum. It doesn’t leap from one eureka moment to the next, even though there may be such moments in your life if you’re fortunate. No, it builds slowly from one day to the next, one month to the next. Whatever it is you’re thinking about will demand of you that you think about it a lot and for a long time, and you won’t be able to do that if you’re distracted from moment to moment, or if you allow long gaps between one session of work and the next. Undistracted time is the space in which intellectual work is done: It’s the space for that work in the same way that the factory floor is the space for the assembly line.
This chimes with a quotation from Mark Manson I referenced yesterday, in which he talks about the joy you feel and meaning you experience when you've spent decades dedicated to one thing in particular. You have to carve out time for that, whether through your occupation, or through putting aside leisure time to pursue it.
The third requirement is training. Once you know what you want to think about, you need to learn whatever skills are necessary for good thinking about it, and whatever body of knowledge is requisite for such thinking. These days we tend to think of this as requiring university studies.

[…]

The most essential skill is surprisingly hard to come by. That skill is attention. Intellectuals always think about something, and that means they need to know how to attend to what they’re thinking about. Attention can be thought of as a long, slow, surprised gaze at whatever it is.

[…]

The long, slow, surprised gaze requires cultivation. We’re quickly and easily habituated, with the result that once we’ve seen something a few times it comes to seem unsurprising, and if it’s neither threatening nor useful it rapidly becomes invisible. There are many reasons for this (the necessities of survival; the fact of the Fall), but whatever a full account of those might be (“full account” being itself a matter for thinking about), their result is that we can’t easily attend.

This section was difficult to quote as it weaves in specific details from the original student’s letter, but the gist is that people assume that universities are good places for intellectual pursuits. Griffiths responds that this may or may not be the case, and, in fact, is less likely to be true as the 21st century progresses.

Instead, we need to cultivate attention, which he describes as being almost like a muscle. Griffiths suggests “intentionally engaging in repetitive activity” such as “practicing a musical instrument, attending Mass daily, meditating on the rhythms of your breath, taking the same walk every day (Kant in Königsberg)” to “foster attentiveness”.

[The] fourth requirement is interlocutors. You can’t develop the needed skills or appropriate the needed body of knowledge without them. You can’t do it by yourself. Solitude and loneliness, yes, very well; but that solitude must grow out of and continually be nourished by conversation with others who’ve thought and are thinking about what you’re thinking about. Those are your interlocutors. They may be dead, in which case they’ll be available to you in their postmortem traces: written texts, recordings, reports by others, and so on. Or they may be living, in which case you may benefit from face-to-face interactions, whether public or private. But in either case, you need them. You can neither decide what to think about nor learn to think about it well without getting the right training, and the best training is to be had by apprenticeship: Observe the work—or the traces of the work—of those who’ve done what you’d like to do; try to discriminate good instances of such work from less good; and then be formed by imitation.
I talked in my thesis about the impossibility of being 'literate' unless you've got a community in which to engage in literate practices. The same is true of intellectual activity: you can't be an intellectual in a vacuum.

As a society, we worship at the altar of the lone genius but, in fact, that idea is fundamentally flawed. Progress and breakthroughs come through discussion and collaboration, not sitting in a darkened room by yourself with a wet tea-towel over your head, thinking very hard.

Interestingly, and importantly, Griffiths points out to the student to whom he’s replying that the life of an intellectual might seem attractive, but that it’s a long, hard road.

And lastly: Don’t do any of the things I’ve recommended unless it seems to you that you must. The world doesn’t need many intellectuals. Most people have neither the talent nor the taste for intellectual work, and most that is admirable and good about human life (love, self-sacrifice, justice, passion, martyrdom, hope) has little or nothing to do with what intellectuals do. Intellectual skill, and even intellectual greatness, is as likely to be accompanied by moral vice as moral virtue. And the world—certainly the American world—has little interest in and few rewards for intellectuals. The life of an intellectual is lonely, hard, and usually penurious; don’t undertake it if you hope for better than that. Don’t undertake it if you think the intellectual vocation the most important there is: It isn’t. Don’t undertake it if you have the least tincture in you of contempt or pity for those without intellectual talents: You shouldn’t. Don’t undertake it if you think it will make you a better person: It won’t. Undertake it if, and only if, nothing else seems possible.
A long read, but a rewarding one.

Source: First Things

Craig Mod's subtle redesign of the hardware Kindle

I like Craig Mod’s writing. He’s the guy who’s written about his need to walk, drawing his own calendar, and getting his attention back.

This article is about hardware Kindle devices — the distinction being important given that you can read your books via the Kindle Cloud Reader or, indeed, via an app on pretty much any platform.

As he points out, the user interface remains sub-optimal:

Tap most of the screen to go forward a page. Tap the left edge to go back. Tap the top-ish area to open the menu. Tap yet another secret top-right area to bookmark. This model co-opts the physical space of the page to do too much.

The problem is that the text is also an interface element. But it’s a lower-level element. Activated through a longer tap. In essence, the Kindle hardware and software team has decided to “function stack” multiple layers of interface onto the same plane.

And so this model has never felt right.

He suggests an alternative to this which involves physical buttons on the device itself:

Hardware buttons:

  • Page forward
  • Page back
  • Menu
  • (Power/Sleep)

What does this get us?

It means we can now assume that — when inside of a book — any tap on the screen is explicitly to interact with content: text or images within the text. This makes the content a first-class object in the interaction model. Right now it’s secondary, engaged only if you tap and hold long enough on the screen. Otherwise, page turn and menu invocations take precedence.

I can see why he proposes this, but I'm not so sure about the physical buttons for page turns. The reason I'd say that is that, although I now use a Linux-based bq Cervantes e-reader, before 2015 I had almost every iteration of the hardware Kindle. There's a reason Amazon removed hardware buttons for page turns.

I read in lots of places, but I read in bed with my wife every day and if there’s one thing she couldn’t stand, it was the clicking noise of me turning the page on my Kindle. Even if I tried to press it quietly, it annoyed her. Touchscreen page turns are much better.

The e-reader I use has a similar touch interaction to the Kindle, so I see where Craig Mod is coming from when he says:

When content becomes the first-class object, every interaction is suddenly bounded and clear. Want the menu? Press the (currently non-existent) menu button towards the top of the Kindle. Want to turn the page? Press the page turn button. Want to interact with the text? Touch it. Nothing is “hidden.” There is no need to discover interactions. And because each interaction is clear, it invites more exploration and play without worrying about losing your place.

This, if you haven't come across it before, is user interface design, or UI design for short. It's important stuff, for as Steve Jobs famously said: "Everything in this world... was created by people no smarter than you" — and that's particularly true in tech.

Source: Craig Mod

Profiting from your enemies

While I don’t feel like I’ve got any enemies, I’m sure there are plenty of people who don’t like me, for whatever reason. I’ve never thought about framing it this way, though:

In Plutarch’s “How to Profit by One’s Enemies,” he advises that rather than lashing out at your enemies or completely ignoring them, you should study them and see if they can be useful to you in some way. He writes that because our friends are not always frank and forthcoming with us about our shortcomings, “we have to depend on our enemies to hear the truth.” Your enemy will point out your weak spots for you, and even if he says something untrue, you can then analyze what made him say it.

People close to us don't want to offend or upset us, so they don't point out areas where we could improve. So we should take negative comments and, rather than 'feed the trolls', use them as a way to get better (without ever even referencing the 'enemy').

Source: Austin Kleon

The root of all happiness

“Without acknowledging the ever-present gaze of death, the superficial will appear important, and the important will appear superficial. Death is the only thing we can know with any certainty. And as such, it must be the compass by which we orient all our other values and decisions. It is the correct answer to all of the questions we should ask but never do. The only way to be comfortable with death is to understand and see yourself as something bigger than yourself; to choose values that stretch beyond serving yourself, that are simple and immediate and controllable and tolerant of the chaotic world around you. This is the basic root of all happiness.”

(Mark Manson)

Random Street View does exactly what you think it does

Today’s a non-work day for me but, after reviewing resource-centric social media sites as part of my Moodle work yesterday, I rediscovered the joy of StumbleUpon.

That took me to lots of interesting sites which, if you haven’t used the service before, become more relevant to your tastes as time goes on if you use the thumbs up / thumbs down tool.

I came across this Random Street View site which I’ve a sneaking suspicion I’ve seen before. Not only is it a fascinating way to ‘visit’ lesser-known parts of the world, it also shows the scale of Google’s Street View programme.

The teacher in me imagines using this as the starting point for some kind of project. It could be a writing prompt, you could use it to randomly find somewhere to do some research on, or it could even be an art project.

Great stuff.

Source: Random Street View

Long-term investments

“To truly appreciate something, you must confine yourself to it. There’s a certain level of joy and meaning that you reach in life only when you’ve spent decades investing in a single relationship, a single craft, a single career. And you cannot achieve those decades of investment without rejecting the alternatives.”

(Mark Manson)

Deciding what to do next

This post by Daniel Gross, partner in a well-known startup accelerator, is written for an audience of people in tech looking to build their next company. However, I think there are more widely-applicable takeaways from it.

Gross mentions the following:

  1. If you want to make something grand, don’t start with grand ambitions
  2. Focus on the repeat offenders
  3. Tell your friends what you’re doing
  4. Make sure you enjoy thinking about it
  5. Get in the habit of simplifying
  6. Validate your market
  7. Launch uncomfortably quickly
To explain and unpack, point two is getting at those things that you think about every so often, those things you're curious about. Points six and seven are, of course, focused on putting products in a marketplace, but I think there's a way to think about this from a different perspective.

Take someone who’s looking for the next thing to do. Perhaps they’re dissatisfied with their current line of work, and so want to pursue opportunities in a different sector. It’s useful for them to look at what’s ‘normal’ (for example, teachers and lawyers work long hours). Once you’ve done your due diligence, it’s worth just getting started. Go and do something to set yourself on the road.

If there’s anything you remember from the post, let it be these two words: perpetual motion. Just Do It. Make little steps every day. One day that’ll add up to the next Google, Apple or Facebook.
...or, indeed, a role that you much prefer to the one you're performing now!

Source: Daniel Gross

Designing for privacy

Someone described the act of watching Mark Zuckerberg, CEO of Facebook, testifying before Congress as “low level self-harm”. In this post, Joe Edelman explains why:

Zuckerberg and the politicians—they imagine privacy as if it were a software feature. They imagine a system has “good privacy” if it’s consensual and configurable; that is, if people explicitly agree to something, and understand what they agree to, that’s somehow “good for privacy”. Even usually-sophisticated-analysts like Zeynep Tufekci are missing all the nuance here.

Giving the example of a cocktail party where you're talking to a friend about something confidential and someone else you don't know comes along, Edelman introduces this definition of privacy:
Privacy, n. Maintaining a sense of what to show in each environment; Locating social spaces for aspects of yourself which aren’t ready for public display, where you can grow those parts of yourself until they can be more public.
I really like this definition, especially the part around "locating social spaces for aspects of yourself which aren't ready for public display". I think educators in particular should note this.

Referencing his HSC1 Curriculum which is the basis for workshops he runs for staff from major tech companies, Edelman includes a graphic on the structural features of privacy. I’ll type this out here for the sake of legibility:

  • Relational depth (close friends / acquaintances / strangers / anonymous / mixed)
  • Presentation (crafted / basic / disheveled)
  • Connectivity (transient / pairwise / whole-group)
  • Stakes (high / low)
  • Status levels (celebrities / rank / flat)
  • Reliance (interdependent / independent)
  • Time together (none / brief / slow)
  • Audience size (big / small / unclear)
  • Audience loyalty (loyal / transient / unclear)
  • Participation (invited / uninvited)
  • Pretext (shared goal / shared values / shared topic / many goals (exchange) / emergent)
  • Social Gestures (like / friend / follow / thank / review / comment / join / commit / request / buy)
The post is, of course, both an expert response to the zeitgeist, and a not-too-subtle hint that people should take his course. I'm sure Edelman goes into more depth about each of these structural features in his workshops.

Nevertheless, and even without attending his sessions (which I’m sure are great) there’s value in thinking through each of these elements for the work I’m doing around the MoodleNet project. I’ve probably done some thinking around 70% of these, but it’s great to have a list that helps me organise my thinking a little more.

Source: Joe Edelman

Multiple income streams

Right now, I’m splitting my time between being employed (four days per week with Moodle), my consultancy and the co-op which I co-founded (one day per week combined). In other words, I have more than one income stream, as this article suggests:

Having multiple income streams can come in handy if one income stream dries up. After two years in business, I've learned that you'll always have peaks and valleys. Sometimes everyone is paying you, and sometimes your lead pipeline can look barren. I started a marketing and PR agency and spent that first year working my startup muscles: planning, strategizing, defining markets. If I hit a slow month, I kept working those same exercises. While it helped grow my business, I sometimes needed an intellectual rest day.

People who have only ever been employed (which was me until three years ago!) wonder about the insecurity of consulting. But the truth is that every occupation these days is precarious — it's just hidden if you're employed.

This is a short article, but it's useful as both a call-to-action and to reinforce existing practices:

Developing a secondary income stream is easier than you may think. Think about how you like to spend your off hours and research potential markets. Maybe you're really good at explaining something that is a difficult concept for other people--create a course on an on-demand training site like Udemy or Skillshare.

In general, we think more people are paying attention to us than they actually are. Your first endeavour doesn't have to set the world on fire, be a smash hit, or a bestseller. The important thing is to get out there and provide something that people want.

Through volunteering, putting myself out there, and developing my network, I haven't had to apply for a job since 2010. Also, with my consultancy, it's all inbound stuff. Some call it luck but, as Thomas Edison is quoted as saying:

Opportunity is missed by most people because it is dressed in overalls and looks like work.
I'd add that knowledge work doesn't look like work. But that's a whole other post.

Source: Inc.

In praise of ordinary lives

This richly-illustrated post uses as a touchstone the revolution in art that took place in the 17th century with Johannes Vermeer’s The Little Street. The painting (which can be seen above) moves away from epic and religious symbolism, and towards the everyday.

Unfortunately, and particularly with celebrity lifestyles on display everywhere, we seem to be moving back to pre-17th century approaches:

Today – in modern versions of epic, aristocratic, or divine art – adverts and movies continually explain to us the appeal of things like sports cars, tropical island holidays, fame, first-class air travel and expansive limestone kitchens. The attractions are often perfectly real. But the cumulative effect is to instill in us the idea that a good life is built around elements that almost no one can afford. The conclusion we too easily draw is that our lives are close to worthless.
A good life isn't one where you get everything you want; that would, in fact, be a form of torture. Just ask King Midas. Instead, it's made up of lots of little things, as this post outlines:
There is immense skill and true nobility involved in bringing up a child to be reasonably independent and balanced; maintaining a good-enough relationship with a partner over many years despite areas of extreme difficulty; keeping a home in reasonable order; getting an early night; doing a not very exciting or well-paid job responsibly and cheerfully; listening properly to another person and, in general, not succumbing to madness or rage at the paradox and compromises involved in being alive.
As ever, a treasure trove of wisdom and I encourage you to explore further the work of the School of Life.

Source: The Book of Life

Issue #299: Jersey shore

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Alienated life

“The less you eat, drink, buy books, go to the theatre or to balls, or to the pub, and the less you think, love, theorize, sing, paint, fence, etc., the more you will be able to save and the greater will become your treasure which neither moths nor rust will devour — your capital. The less you are, the less you express your own life, the more you have, the greater is your alienated life and the greater is the saving of your alienated being.”
(Karl Marx)

All that is gold does not glitter

"All that is gold does not glitter,
Not all those who wander are lost;
The old that is strong does not wither,
Deep roots are not reached by the frost.



From the ashes a fire shall be woken,
A light from the shadows shall spring;
Renewed shall be blade that was broken,
The crownless again shall be king."



(J.R.R. Tolkien)

The death of the newsfeed (is much exaggerated)

Benedict Evans is a venture capitalist who focuses on technology companies. He’s a smart guy with some important insights, and I thought his recent post about the ‘death of the newsfeed’ on social networks was particularly useful.

He points out that it’s pretty inevitable that the average person will, over the course of a few years, add a few hundred ‘friends’ to their connections on any given social network. Let’s say you’re connected with 300 people, and they all share five things each day. That’s 1,500 things you’ll be bombarded with, unless the social network does something about it.
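The scale of the problem is just multiplication. A back-of-envelope sketch using Evans’s figures (the 15% scroll-through rate below is my own illustrative assumption, not his):

```python
# How many items an unfiltered feed produces per day, using Evans's figures.
friends = 300
posts_per_friend_per_day = 5

items_per_day = friends * posts_per_friend_per_day
print(items_per_day)  # → 1500

# If you only scroll through, say, ~15% before giving up, the
# 'chronological feed' is really a sample of whatever happened
# to be posted just before you opened the app.
items_seen = int(items_per_day * 0.15)
print(items_seen)  # → 225
```

Double the friend count and the feed doubles too, which is why every network eventually reaches for an algorithm.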

This overload means it now makes little sense to ask for the ‘chronological feed’ back. If you have 1,500 or 3,000 items a day, then the chronological feed is actually just the items you can be bothered to scroll through before giving up, which can only be 10% or 20% of what’s actually there. This will be sorted by no logical order at all except whether your friends happened to post them within the last hour. It’s not so much chronological in any useful sense as a random sample, where the randomizer is simply whatever time you yourself happen to open the app. ‘What did any of the 300 people that I friended in the last 5 years post between 16:32 and 17:03?’ Meanwhile, giving us detailed manual controls and filters makes little more sense - the entire history of the tech industry tells us that actual normal people would never use them, even if they worked. People don't file.
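Evans' arithmetic is easy to sketch. This is an illustrative toy (the session budget of 200 items is my own guess, not a figure from Evans): with 300 connections each posting five items a day, the slice you see depends almost entirely on when you happen to open the app.

```python
import random

# Hedged sketch of Evans' arithmetic: 300 connections posting
# 5 items/day each yields 1,500 items. A reader with a fixed
# scrolling budget sees only a fraction of them.
connections = 300
posts_per_day = 5
items_per_day = connections * posts_per_day  # 1,500

items_scrolled = 200  # rough guess at one session's attention budget
fraction_seen = items_scrolled / items_per_day
print(f"{items_per_day} items/day; you see ~{fraction_seen:.0%} of them")

# A 'chronological feed' is effectively a random sample keyed to your
# open time: only posts from the half-hour before you opened the app.
post_times = sorted(random.uniform(0, 24) for _ in range(items_per_day))
open_time = random.uniform(0.5, 24)
visible = [t for t in post_times if open_time - 0.5 <= t <= open_time]
print(f"posts in the half-hour before you opened the app: {len(visible)}")
```

Running this a few times shows the point: the same 1,500 posts yield a completely different "feed" depending on the open time, which is why the chronological feed is closer to a random sample than a record.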

So we end up with algorithmic feeds, which is an attempt by social networks to ensure that you see the stuff that you deem important. It is, of course, an almost impossible mission.

[T]here are a bunch of problems around getting the algorithmic newsfeed sample ‘right’, most of which have been discussed at length in the last few years. There are lots of incentives for people (Russians, game developers) to try to manipulate the feed. Using signals of what people seem to want to see risks over-fitting, circularity and filter bubbles. People’s desires change, and they get bored of things, so Facebook has to keep changing the mix to try to reflect that, and this has made it an unreliable partner for everyone from Zynga to newspapers. Facebook has to make subjective judgements about what it seems that people want, and about what metrics seem to capture that, and none of this is static or even in principle perfectible. Facebook surfs user behaviour.

Evans then goes on to raise the problem that what you want to see may be different from what your friends want you to see. So people solve the problem of algorithmic feeds not showing them what they really want by using messaging apps such as WhatsApp and Telegram to interact individually with people or small groups.

The problem with that, though?

The catch is that though these systems look like they reduce sharing overload, you really want group chats. And lots of groups. And when you have 10 WhatsApp groups with 50 people in each, then people will share to them pretty freely. And then you think ‘maybe there should be a screen with a feed of the new posts in all of my groups. You could call it a ‘news feed’. And maybe it should get some intelligence, to show the posts you care about most...

So, to Evans' mind (and I'm tempted to agree with him) we're in a never-ending spiral. The only way I can see out of it is user education, particularly around owning one's own data and IndieWeb approaches.

Source: Benedict Evans

Absentee leadership

Leadership is a funny thing. There’s lots written about it, but, at the end of the day, it’s all about relationships.

I’ve worked for some great leaders, and some shitty managers. This Harvard Business Review article describes the usual three ways those in positions of power get things wrong:

The key derailment characteristics of bad managers are well documented and fall into three broad behavioral categories: (1) “moving away behaviors,” which create distance from others through hyper-emotionality, diminished communication, and skepticism that erodes trust; (2) “moving against behaviors,” which overpower and manipulate people while aggrandizing the self; and (3) “moving toward behaviors,” which include being ingratiating, overly conforming, and reluctant to take chances or stand up for one’s team.
But there's another, potentially even worse, category:
Absentee leaders are people in leadership roles who are psychologically absent from them. They were promoted into management, and enjoy the privileges and rewards of a leadership role, but avoid meaningful involvement with their teams. Absentee leadership resembles the concept of rent-seeking in economics — taking value out of an organization without putting value in. As such, they represent a special case of laissez-faire leadership, but one that is distinguished by its destructiveness.
The problem with absentee leaders, as the article explains, is that they rarely get weeded out. There are always more pressing problems to deal with. So the people who report to them exist in a professional feedback vacuum.
The chances are good, however, that your organization is unaware of its absentee leaders, because they specialize in flying under the radar by not doing anything that attracts attention. Nonetheless, the adhesiveness of their negative impact may be slowly harming the company.
If leadership is about relationships, then the worst leaders are those who show poor emotional intelligence, don't invest in building trust, and provide little constructive feedback. If you're in a position of leadership, it's worth thinking about this from the point of view of others who interact with you on a regular basis...

Source: Harvard Business Review

Social internet vs social media

It’s good to see Cal Newport, whose book Deep Work I found unexpectedly great last year, add a bit more nuance to his position on social media:

The young progressives grew up in a time when platform monopolies like Facebook were so dominant that they seemed inextricably intertwined into the fabric of the internet. To criticize social media, therefore, was to criticize the internet’s general ability to do useful things like connect people, spread information, and support activism and expression.

The older progressives, however, remember the internet before the platform monopolies. They were concerned to observe a small number of companies attempt to consolidate much of the internet into their for-profit, walled gardens.

To them, social media is not the internet. It was instead a force that was co-opting the internet — including the powerful capabilities listed above — in ways that would almost certainly lead to trouble.

Newport has started talking about the difference between ‘social media’ and the ‘social internet’:

The social internet describes the general ways in which the global communication network and open protocols known as “the internet” enable good things like connecting people, spreading information, and supporting expression and activism.

Social media, by contrast, describes the attempt to privatize these capabilities by large companies within the newly emerged algorithmic attention economy, a particularly virulent strain of the attention sector that leverages personal data and sophisticated algorithms to ruthlessly siphon users’ cognitive capital.

If you’d asked people in 2005, they would have said that there was no way that people would leave MySpace in favour of a different platform.

People like Facebook. But if you could offer them a similar alternative that stripped away the most unsavory elements of Zuckerberg’s empire (perhaps funded by a Wikipedia-style nonprofit collective, or a modest subscription fee), many would happily jump ship.
Indeed.

Following up with another post this week, Newport writes:

My argument is that you can embrace the social internet without having to become a “gadget” inside the algorithmic attention economy machinations of the social media conglomerates. As noted previously, I think this is the right answer for those who are fed up with the dehumanizing aspects of social media, but are reluctant to give up altogether on the potential of the internet to bring people together.
He suggests several ways for this to happen:
  • Approach #1: The Slow Social Media Philosophy
  • Approach #2: Own Your Own Domain
This is, in effect, the IndieWeb approach. However, I still think that Newport and others who work in universities may be a special case. As Austin Kleon notes, there are already built-in ways for your career to advance in academia. Others have to show their work...

What I don’t see being discussed is that, as we collectively mature in our use of social media, we’re likely to use different networks for different purposes. Facebook, LinkedIn, and the like try to force us into a single online identity. It’s OK to look and act differently when you’re around different people in different environments.

Source: Cal Newport (On Social Media and Its Discontents / Beyond #DeleteFacebook: More Thoughts on Embracing the Social Internet Over Social Media)

The '1, 2, 3' approach to organising your working day

I subscribe to the free version of Stowe Boyd’s Work Futures newsletter. He’s jumped around platforms a bit, though I think he’d be better off charging a smaller amount for a larger audience on Patreon.

Boyd’s latest post talks about how he approaches his work, a subject I find endlessly fascinating.

I basically employ three styles of work journaling:
  1. On a daily basis, I plan and track my work with the ‘1, 2, 3′ technique.
  2. On a weekly basis, I plan and track using the ‘must, should, might’ technique.
  3. On ‘agenda’ projects, I plan and track using the ‘do, do, do’ technique. I use the term ‘agenda’ to distinguish with the short-range calendar orientation of daily and weekly projects. This will make more sense, later on.
Breaking down that '1, 2, 3' technique, he notes that (like me) he's realised there's only a certain amount you can sustainably get done in one day:
Specifically, I have learned that I can do the following:
  1. One major activity, such as working for a few hours on client research, or writing for a few hours. This is the ‘1′ in the ‘1, 2, 3′.
  2. Two medium sized activities, like a 45 minute phone call, or doing an hour-long webinar. This is the ‘2′ in the ‘1, 2, 3′.
  3. Three short activities, taking less than 45 minutes. This is the ‘3′ in the ‘1, 2, 3′.
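Boyd's quota lends itself to a tiny data structure. Here's a minimal sketch of the idea; the `DayPlan` class and category names are my own illustration, not anything from Boyd's post:

```python
# One major, two medium, three short activities per day:
# Boyd's '1, 2, 3' quota as a simple guard against over-planning.
QUOTAS = {"major": 1, "medium": 2, "short": 3}

class DayPlan:
    def __init__(self):
        self.slots = {size: [] for size in QUOTAS}

    def add(self, task, size):
        """Add a task, refusing anything beyond the day's quota."""
        if size not in QUOTAS:
            raise ValueError(f"unknown size: {size}")
        if len(self.slots[size]) >= QUOTAS[size]:
            raise ValueError(f"already have {QUOTAS[size]} {size} task(s)")
        self.slots[size].append(task)

    def is_full(self):
        return all(len(tasks) == QUOTAS[size]
                   for size, tasks in self.slots.items())

plan = DayPlan()
plan.add("client research", "major")
plan.add("45-minute call", "medium")
plan.add("webinar", "medium")
for task in ("email triage", "expenses", "review notes"):
    plan.add(task, "short")
print(plan.is_full())  # True: the day is fully planned
```

The useful property is the refusal: once the day is full, adding a seventh task raises an error instead of quietly inflating the list.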
I'm not sure how many hours per day Boyd works, but I bet it varies. What I like about this approach is that having a 'major activity' that you check off each day makes you feel like you've achieved something. A day full of short and medium-sized activities feels somewhat wasted.

Source: Work Futures

Truth

“If you tell the truth, you don’t have to remember anything.”

(Mark Twain)

Blockcerts mobile

I still don’t really see the need for blockchain-based credentials (particularly given the tension between GDPR and immutability) but this is good to see:

Learning Machine is proud to introduce the new Blockcerts Wallet mobile app (iOS/Android) for people to easily receive, store, and share their official records. These might include electronic IDs, academic records, workforce training, or even civic records.

Blockcerts are compatible with the Open Badges specification. What I do like about Blockcerts is the idea of 'Self-Sovereign Identity' (which I actually think you can do without blockchain):

Blockcerts is the open standard for how to create, anchor, and verify records using any blockchain in a format that is recipient owned and that has no ongoing dependency upon any vendor or issuer. This is what we mean by Self-Sovereign Identity, the ability for people to control their own identity records without paying rent to central authorities for transmission or verification. Instead, people can receive their records once, then share them online or directly with third parties like employers whenever needed. Even if vendors or institutions cease to exist, people never lose the ability to use their official records and prove their identity.

Just as it makes sense for Facebook to try and get everyone to use it as their only social network, it totally makes sense for a startup like Learning Machine to be focusing on the Blockcerts Wallet being the single place for people to store their official records.

The Blockcerts Wallet is positioned to be a lifelong portfolio of official records, a personal repository from across disparate institutions in one convenient location. This means that individuals can become their own lifelong registrar of learning and achievement. So, it’s critical that the Wallet remain free and friendly to use, with plenty of accommodation for people who may lose or transition devices.

The good thing, of course, is that Blockcerts is an open standard. So anyone can build a wallet.
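The verification idea underneath Blockcerts-style records can be sketched without any blockchain machinery: hash the record at issue time, publish the digest somewhere tamper-evident, and later recompute and compare. This is a simplified illustration of hash anchoring, not the actual Blockcerts protocol (which uses Merkle trees and signatures); the record fields are invented for the example.

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    # Canonicalise before hashing so the same record always yields
    # the same digest regardless of key order.
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

# Issuer computes and 'anchors' the digest somewhere append-only
# (a blockchain, a transparency log, even a newspaper ad).
record = {"recipient": "alice", "credential": "Web Literacy 101"}
anchored = fingerprint(record)

# A verifier later recomputes from the copy the recipient presents.
assert fingerprint(record) == anchored        # untampered copy matches

tampered = {**record, "credential": "PhD"}
assert fingerprint(tampered) != anchored      # tampered copy fails
print("untampered record verifies; tampered record does not")
```

The point of the design is that verification needs nothing from the issuer after anchoring, which is why records survive even if the issuing institution disappears.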

Source: Learning Machine blog

Automated Chinese jaywalking fines are a foretaste of so-called 'smart cities'

Given the choice of living in a so-called ‘smart city’ and living in rural isolation, I think I’d prefer the latter. This opinion has been strengthened by reading about what’s going on in China at the moment:

Last April, the industrial capital of Shenzhen installed anti-jaywalking cameras that use facial recognition to automatically identify people crossing without a green pedestrian light; jaywalkers are shamed on a public website and their photos are displayed on large screens at the intersection.

Nearly 14,000 people were identified by the system in its first ten months of its operation. Now, Intellifusion, who created the system, is planning to send warnings by WeChat and Sina Weibo messages; repeat offenders will get their social credit scores docked.

Yes, that’s right: social credit. Much more insidious than a fine, having a low social credit rating means that you can’t travel.

Certainly something to think about when you hear people talking about ‘smart cities of the future’.

Source: BoingBoing

(related: 99% Invisible podcast on the invention of ‘jaywalking’)

What's the link between employment and creativity?

These days, we tend to think of artists as working on their art full-time. After all, it’s their passion and vocation. That’s not always the case, as this article points out:

The avant-garde composer Philip Glass shocked at least one music lover when he materialized, smock-clad and brandishing plumber’s tools, in a home with a malfunctioning appliance. “While working,” Glass recounted to The Guardian in 2001, “I suddenly heard a noise and looked up to find Robert Hughes, the art critic of Time magazine, staring at me in disbelief. ‘But you’re Philip Glass! What are you doing here?’ It was obvious that I was installing his dishwasher and I told him that I would soon be finished. ‘But you are an artist,’ he protested. I explained that I was an artist but that I was sometimes a plumber as well and that he should go away and let me finish.”
Art and employment aren't necessarily separate spheres, but can influence one another:

But then there is another category of artists-with-jobs: people whose two professions play off each other in unexpected ways. For these creators, a trade isn’t just about paying the bills; it’s something that grounds them in reality. In 2017, a day job might perform the same replenishing ministries as sleep or a long run: relieving creative angst, restoring the artist to her body and to the texture of immediate experience. But this break is also fieldwork. For those who want to mine daily life for their art, a second job becomes an umbilical cord fastened to something vast and breathing. The alternate gig that lifts you out of your process also supplies fodder for when that process resumes. Lost time is regained as range and perspective, the artist acquiring yet one more mode of inhabiting the world.

It's all very well being in your garret creating art, but what about your self-development and responsibility to society?

Some cultivate their art because it sustains their work, or because it fulfills a sense of civic responsibility. Writing children’s literature “has helped me grow in confidence as a person, which in turn has helped me develop … as an officer, too,” said Gavin Puckett, a U.K.-based policeman (it remains his primary income source) and author of the prizewinning 2013 “Fables From the Stables” series. Puckett, who joined the service in 1998, sketched the rhyming adventure “Murray the Horse” after passing a horse in a field while listening to a radio announcer report on “sports and activities you can only complete backwards” — he imagined a story about a horse that runs in reverse. He admits that telling stories still makes him feel as though he’s “stepping out of character.” “My role as a police officer came first,” he told me.

Perhaps it's because I'm recently employed, or don't really see myself as an 'artist', but I like the final section of this article:
The trope of the secluded creator has echoes of imprisonment and stasis. (After all, who wants to spend all their time in one room, even if it belongs to them?) Sometimes the artist needs to turn off, to get out in the fray, to stop worrying over when her imagination’s pot will boil — because, of course, it won’t if she’s watching. And regardless of whether the reboot results in brilliance down the line, that lunchtime stroll isn’t going to take itself, those stray thoughts won’t think themselves, the characters on the corner certainly won’t gawk at themselves. Artists: They’re just like us, unless they can afford not to be, in which case they still are, but doing a better job of concealing it.
Source: The New York Times Style Magazine

Mozilla's Web Literacy Curriculum

I’m not sure what to say about this announcement from Mozilla about their ‘new’ Web Literacy Curriculum. I led this work from 2012 to 2015 at the Mozilla Foundation, but it doesn’t seem to be any further forward now than when I left.

In fact, it seems to have just been re-focused for the libraries sector:

With support from Institute of Museum and Library Services, and a host of collaborators including key public library leaders from around the country, this open-source, participatory, and hands-on curriculum was designed to help the everyday person in a library setting, formal and informal education settings, community center, or at your kitchen table.

The site for the Web Literacy Curriculum features resources that will already be familiar to those who follow Mozilla's work.

Four years ago, I wrote a post on the Mozilla Learning blog about Atul Varma’s WebLitMapper, Laura Hilliger’s Web Literacy Learning Pathways, as well as the draft alignment guidelines I’d drawn up. Where has the innovation gone since that point?

It’s sad to see such a small, undeveloped resource from an organisation that once showed such potential in teaching the world the Web.

Source: Read, Write, Participate

Issue #298: Easter treats

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Albert Camus quotation

I’ve long admired the “invincible summer” quotation from Camus. The longer version, however, is much better.

After I couldn’t find anywhere to buy a version that met my requirements (simple aesthetic, longer quotation), I decided to order a custom wall decal.

This evening, I put it up on the wall above the monitor on my standing desk. If you’re wondering where the author attribution is, well… I didn’t get that bit quite right, so it’s in the bin!

xkcd on conversational dynamics

xkcd cartoon

Source: xkcd

Not everyone is going to like you

One of my favourite parts of Marcus Aurelius' Meditations is this one:

Begin each day by telling yourself: Today I shall be meeting with interference, ingratitude, insolence, disloyalty, ill-will, and selfishness – all of them due to the offenders’ ignorance of what is good or evil. But for my part I have long perceived the nature of good and its nobility, the nature of evil and its meanness, and also the nature of the culprit himself, who is my brother (not in the physical sense, but as a fellow creature similarly endowed with reason and a share of the divine); therefore none of those things can injure me, for nobody can implicate me in what is degrading. Neither can I be angry with my brother or fall foul of him; for he and I were born to work together, like a man’s two hands, feet or eyelids, or the upper and lower rows of his teeth. To obstruct each other is against Nature’s law – and what is irritation or aversion but a form of obstruction.
In other words, you're going to deal with people you don't like, and people who don't like you.

This article from Lifehacker is along the same lines:

“Remember that it is impossible to please everyone,” Chloe Brotheridge, a hypnotherapist and anxiety expert, tells us. “You have your own unique personality which means some people will love and adore you, while others may not.” Of course, while this concept is easy to understand on its face, it’s difficult to keep your perspective in check when you find you’re, say, left out of invitations to happy hours with co-workers, or getting noncommittal responses from potential new friends, or you overhear your roommates bad-mouthing you. Rejection is painful in any form, whether it be social or romantic, and it’s a big ego blow to get bumped from the inner circle.
I had a good friend of mine cut me off a few years ago. This was a guy who my kids called 'uncle', without him actually being a family member. But hey, no hard feelings:
So, it’s not really that it’s not you but them, so much as it’s both you and them. “This person, this situation, where they are in their life, it’s not compatible to where you are,” Jennifer Verdolin, an animal behavior expert and adjunct professor at Duke University, tells us. “We have preferences in terms of personality, and that’s not to say that your personality is bad. It’s different from mine, and I prefer to hang around people who are similar to me.”
There's incompatibility, different life stages, and there's just being a dick:
While you shouldn’t always blame yourself if someone doesn’t like you, if you’re finding this is a pattern, you may want to take an unbiased look at your own behavior. “When I put people in a [therapy] group, I get to see immediately what problems or tics or bad social habits they have,” Grover says. He recalls a successful, handsome male patient of his who was having trouble holding onto romantic relationships. Though they were unable to solve the problem together in individual therapy, Grover managed to convince the patient to join a group. “Within five minutes, I was horrified,” Grover says. “He gets very anxious in front of people, and to camouflage his anxiety he becomes overly confident, which comes across as arrogant. The women in the group commented that he was becoming less popular the more they got to know him.”
You can't please all of the people all of the time, but you can introspect and know yourself. Then you're in a stronger position to say what (and who) you like, and for what reasons.

Final thought? It’s worth being nice to people as you never know when they’re going to be in a position to do you a favour. It doesn’t, however, mean you have to hang out with them all of the time.

Source: Lifehacker

No-one wants a single identity, online or offline

It makes sense for companies reliant on advertising to not only get as much data as they can about you, but to make sure that you have a single identity on their platform to which to associate it.

This article by Cory Doctorow in BoingBoing reports on some research around young people and social media. As Doctorow states:

Social media has always had a real-names problem. Social media companies want their users to use their real names because it makes it easier to advertise to them. Users want to be able to show different facets of their identities to different people, because only a sociopath interacts with their boss, their kids, and their spouse in the same way.
I was talking to one of my Moodle colleagues about how, in our mid-thirties, we're a 'bridging' generation between those who only went online in adulthood, and those who have only ever known a world with the internet. I got online for the first time when I was about fourteen or fifteen.

Those younger than me are well aware of the perils and pitfalls of a single online identity:

Amy Lancaster from the Journalism and Digital Communications school at the University of Central Lancashire studies the way that young people resent "the way Facebook ties them into a fixed self...[linking] different areas of a person’s life, carrying over from school to university to work."
I think Doctorow has made an error with Amy's surname: it's given as 'Binns', not 'Lancaster', in both the journal article and the original post.

Binns writes:

Young people know their future employers, parents and grandparents are present online, and so they behave accordingly. And it’s not only older people that affect behaviour.

My research shows young people dislike the way Facebook ties them into a fixed self. Facebook insists on real names and links different areas of a person’s life, carrying over from school to university to work. This arguably restricts the freedom to explore new identities – one of the key benefits of the web.

The desire for escapable transience over damning permanence has driven Snapchat’s success, precisely because it’s a messaging app that allows users to capture videos and pictures that are quickly removed from the service.

This is important for the work I’m leading around Project MoodleNet. It’s not just teenagers who want “escapable transience over damning permanence”.

Source: BoingBoing

Contentment

“Fortify yourself with contentment, for this is an impregnable fortress.” (Epictetus)

The spectrum of work autonomy

Some companies have (and advertise as a huge perk) their ‘unlimited vacation’ policy. That, of course, sounds amazing. Except, of course, that there’s a reason why companies are so benevolent.

I can think of at least two:

  1. Your peers will exert downward pressure on the number of holidays you actually take.
  2. If there's no set holiday entitlement, the company doesn't have to pay out unused holiday days when you leave.
This article by Gaby Hinsliff in The Guardian uses the unlimited vacation policy as an example of the difference between two ends of the spectrum when it comes to jobs.
And that, increasingly, is the dividing line in modern workplaces: trust versus the lack of it; autonomy versus micro-management; being treated like a human being or programmed like a machine. Human jobs give the people who do them chances to exercise their own judgment, even if it’s only deciding what radio station to have on in the background, or set their own pace. Machine jobs offer at best a petty, box-ticking mentality with no scope for individual discretion, and at worst the ever-present threat of being tracked, timed and stalked by technology – a practice reaching its nadir among gig economy platforms controlling a resentful army of supposedly self-employed workers.
Never mind robots coming to steal our jobs, that's just a symptom in a wider trend of neoliberal, late-stage capitalism:
There have always been crummy jobs, and badly paid ones. Not everyone gets to follow their dream or discover a vocation – and for some people, work will only ever be a means of paying the rent. But the saving grace of crummy jobs was often that there was at least some leeway for goofing around; for taking a fag break, gossiping with your equally bored workmates, or chatting a bit longer than necessary to lonely customers.
The 'contract' with employers these days goes way beyond the piece of paper you sign that states such mundanities as how much you will be paid or how much holiday you get. It's about trust, as Hinsliff comments:
The mark of human jobs is an increasing understanding that you don’t have to know where your employees are and what they’re doing every second of the day to ensure they do it; that people can be just as productive, say, working from home, or switching their hours around so that they are working in the evening. Machine jobs offer all the insecurity of working for yourself without any of the freedom.
Embedded in this are huge diversity issues. I purposely chose a photo of a young white guy to go with the post, as they're disproportionately likely to do well from this 'trust-based' workplace approach. People of colour, women, and those with disabilities are more likely to suffer from implicit bias and other forms of discrimination.
The debate about whether robots will soon be coming for everyone’s jobs is real. But it shouldn’t blind us to the risk right under our noses: not so much of people being automated out of jobs, as automated while still in them.
I consume a lot of what I post to Thought Shrapnel online, but I originally read this one in the dead-tree version of The Guardian. Interestingly, in the same issue there was a letter from a doctor by the name of Jonathan Shapiro, who wrote that he divides his colleagues into three different types:
  1. Passionate
  2. Dispassionate
  3. Compassionate
The first group suffer burnout, he said. The second group survive but are "lousy". It's the third group that cope, as they "care for patients without sacrificing themselves on the altar of professional vocation".

What we need to be focusing on in education is preparing young people to be compassionate human beings, not cogs in the capitalist machine.

Source: The Guardian

Ignorance and dogmatism

“The greater the ignorance the greater the dogmatism.” (Sir William Osler)

Every part of your digital life is being tracked, packaged up, and sold

I’ve just installed Lumen Privacy Monitor on my Android smartphone after reading this blog post from Mozilla:

New research co-authored by Mozilla Fellow Rishab Nithyanand explores just this: The opaque realm of third-party trackers and what they know about us. The research is titled “Apps, Trackers, Privacy, and Regulators: A Global Study of the Mobile Tracking Ecosystem,” and is authored by researchers at Stony Brook University, Data & Society, IMDEA Networks, ICSI, Princeton University, Corelight, and the University of Massachusetts Amherst.

[...]

In all, the team identified 2,121 trackers — 233 of which were previously unknown to popular advertising and tracking blacklists. These trackers collected personal data like Android IDs, phone numbers, device fingerprints, and MAC addresses.

The full report is linked in the quotation above, but the high-level findings were:

» Most trackers are owned by just a few parent organizations. The authors report that sixteen of the 20 most pervasive trackers are owned by Alphabet. Other parent organizations include Facebook and Verizon. “There is a clear oligopoly happening in the ecosystem,” Nithyanand says.

» Mobile games and educational apps are the two categories with the highest number of trackers. Users of news and entertainment apps are also exposed to a wide range of trackers. In a separate paper co-authored by Vallina-Rodriguez, he explores the intersection of mobile tracking and apps for youngsters: “Is Our Children’s Apps Learning?”

» Cross-device tracking is widespread. The vast majority of mobile trackers are also active on the desktop web, allowing companies to link together personal data produced in both ecosystems. “Cross-platform tracking is already happening everywhere,” Nithyanand says. “Fifteen of the top 20 organizations active in the mobile advertising space also have a presence in the web advertising space.”
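To make the mechanics concrete, here is a purely illustrative Python sketch (not taken from the study) of how a tracker might combine stable identifiers like these into a single device fingerprint; the function name and example inputs are my own invention:

```python
import hashlib

def device_fingerprint(android_id: str, mac: str, phone: str) -> str:
    """Combine stable identifiers into one opaque, reproducible ID.

    Illustrative only: real trackers fold in many more signals
    (installed fonts, screen size, sensor data, and so on).
    """
    raw = "|".join([android_id.lower(), mac.lower(), phone])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

# The same identifiers always yield the same fingerprint, which is
# what lets different apps (and ad networks) link you together.
fp = device_fingerprint("9774d56d682e549c", "02:00:00:00:00:00", "+447700900000")
```

The point of the hash is not secrecy but stability: any party holding the same identifiers derives the same opaque ID, which is enough to join up your activity across apps without ever storing your phone number in the clear.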

We're finally getting to the stage where a large portion of the population can't really ignore the fact that they're using free services in return for pervasive, always-on surveillance.

Source: Mozilla: Read, Write, Participate

Survival in the age of surveillance

The Guardian has a list of 18 tips to ‘survive’ (i.e. be safe) in an age where everyone wants to know everything about you — so that they can package up your data and sell it to the highest bidder.

On the internet, the adage goes, nobody knows you’re a dog. That joke is only 15 years old, but seems as if it is from an entirely different era. Once upon a time the internet was associated with anonymity; today it is synonymous with surveillance. Not only do modern technology companies know full well you’re not a dog (not even an extremely precocious poodle), they know whether you own a dog and what sort of dog it is. And, based on your preferred category of canine, they can go a long way to inferring – and influencing – your political views.
Mozilla has pointed out in a recent blog post that the containers feature in Firefox can increase your privacy and prevent 'leakage' between tabs as you navigate the web. But there's more to privacy and security than just that.

Here’s the Guardian’s list:

  1. Download all the information Google has on you.
  2. Try not to let your smart toaster take down the internet.
  3. Ensure your AirDrop settings are dick-pic-proof.
  4. Secure your old Yahoo account.
  5. 1234 is not an acceptable password.
  6. Check if you have been pwned.
  7. Be aware of personalised pricing.
  8. Say hi to the NSA guy spying on you via your webcam.
  9. Turn off notifications for anything that’s not another person speaking directly to you.
  10. Never put your kids on the public internet.
  11. Leave your phone in your pocket or face down on the table when you’re with friends.
  12. Sometimes it’s worth just wiping everything and starting over.
  13. An Echo is fine, but don’t put a camera in your bedroom.
  14. Have as many social-media-free days in the week as you have alcohol-free days.
  15. Retrain your brain to focus.
  16. Don’t let the algorithms pick what you do.
  17. Do what you want with your data, but guard your friends’ info with your life.
  18. Finally, remember your privacy is worth protecting.
A bit of a random list in places, but useful all the same.
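On tip 6, 'check if you have been pwned' refers to Troy Hunt's Have I Been Pwned service. Its Pwned Passwords API is worth understanding because of its k-anonymity design: you send only the first five characters of your password's SHA-1 hash and get back every matching hash suffix, so neither the password nor its full hash ever leaves your machine. A minimal sketch of the client-side logic (the actual network call is left as a comment):

```python
import hashlib

def hash_split(password: str) -> tuple[str, str]:
    """Return the 5-char hash prefix sent to the API and the suffix kept locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_response: str) -> int:
    """Scan the API's 'SUFFIX:COUNT' lines for our locally-kept suffix.

    Returns the number of breaches the password appeared in (0 if not found).
    """
    for line in range_response.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0

prefix, suffix = hash_split("password123")
# You would GET https://api.pwnedpasswords.com/range/<prefix> here; the
# response is plain text, one "HASHSUFFIX:COUNT" line per match.
```

Because the server only ever sees a five-character prefix shared by hundreds of hashes, it cannot tell which password (if any) you were checking.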

Source: The Guardian

How to get hired

A great short post from Seth Godin, who explains how things work in the real world when you’re looking for a job or your next gig:

You meet someone. You do a small project. You write an article. It leads to another meeting. You do a slightly bigger project for someone else. You make a short film. That leads to a speaking gig. Which leads to a consulting contract. And then you get the gig.
These 'hops' as he calls them are important as they affect the mindset we should adopt:
If you're walking around with a quid pro quo mindset, giving only enough to get what you need right now, and walking away from anyone or anything that isn't the destination—not only are you eliminating all the possible multi-hop options, you're probably not having as much fun or contributing as much as you could either.
Amen to that.

Source: Seth Godin

Alternatives to all of Facebook's main features

Over on a microcast at Patreon (subscribers only, I’m afraid) I referenced an email conversation I’ve been having about getting people off Facebook.

WIRED has a handy list of apps that replicate the functionality of the platform. It’s important to bear in mind that no other platform has the same feature set as Facebook. Of course it doesn’t, because no other platform has the dollars and support of the military-industrial complex quite like Facebook.

Nevertheless, here’s what WIRED suggests:

(Note: I haven't included 'birthday reminders' as that would have involved linking to a Facebook help page, and I don't link to Facebook. Full stop.)

I’ve used, and like, all of the apps on that list, with the exception of Paperless Post, which looks like it’s iOS-only.

OK, so it’s not easy getting people off a site that provides so much functionality, but it’s certainly possible. Lead by example, people.

Source: WIRED

Issue #297: Springing forward

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

The only privacy policy that matters is your own

Dave Pell writes NextDraft, a daily newsletter that’s one of the most popular on the web. I used to subscribe, and it’s undeniably brilliant, but a little US-centric for my liking.

My newsletter, Thought Shrapnel, doesn’t track you. In fact, I have to keep battling MailChimp (the platform I use to send it out), which assumes that disabling tracking is a mistake. Tracking is so pervasive, but I have no need to know exactly how many people clicked on a particular link. It’s an inexact science, anyway.

Pell has written a great post about online privacy:

The story of Cambridge Analytica accessing your personal data on Facebook, supposedly creating a spot-on psychographic profile, and then weaponizing your own personality against you with a series of well-worded messages is now sweeping the media. And it will get louder. And it will pass. And then, I promise, there will be another story about your data being stolen, borrowed, hacked, misused, shared, bought, sold and on and on.

He points out the disconnect between rich people such as Mark Zuckerberg, CEO of Facebook, going to "great lengths" to protect his privacy, whilst simultaneously depriving Facebook users of theirs.

They are right to want privacy. They are right to want to keep their personal lives walled off from anyone from nosy neighbors to potential thieves to, well, Matt Richtel. They should lock their doors and lock down their information. They are right not to want you to know where they live, with whom they live, or how much they spend. They’re right to want to plug a cork in the social media champagne bottle we’ve shaken up in our blind celebration of glass houses.

They are right not to want to toss the floor planks that represent their last hint of personal privacy into the social media wood chipper. They are right in their unwillingness to give in to the seeming inevitability of the internet sharing machine. Do you really think it’s a coincidence that most of the buttons you press on the web are labeled with the word submit?

A Non-Disclosure Agreement (NDA) is something that's been in the news recently as Donald Trump has taken his shady business practices to the White House. Pell notes that the principle behind NDAs is nevertheless sound: you don't get to divulge my personal details without my permission.

So you should follow their lead. Don’t do what they say. Do what they do. Better yet, do what they NDA.

[...]

There’s a pretty simple rule: never share anything on any site anywhere on the internet regardless of any privacy settings unless you are willing to accept that the data might one day be public.

The only privacy policy that matters is your own.

Source: Dave Pell

Co-operation

“Great discoveries and improvements invariably involve the cooperation of many minds.” (Alexander Graham Bell)

Support Thought Shrapnel on Patreon

For almost a year, I’ve been building up supporters for Thought Shrapnel through a semi-automated workflow that involved Gumroad. I still think that’s an excellent platform but, this week, I emailed the ~50 current supporters of Thought Shrapnel to let them know I’ll be transitioning to a Patreon page I’ve set up.

The most economically powerful thing you can do is to buy something for your own enjoyment that also improves the world. This has always been the value proposition of journalism and art. It’s a nonexclusive good that’s best enjoyed nonexclusively. (kottke.org)
If you value Thought Shrapnel, then please do consider backing it on Patreon. You can do so from as little as $1 per month. The first goal I've identified is to reach 100 supporters, as it really encourages me to keep on going with this endeavour!

As part of the transition, I’ll be moving Microcasts over to Patreon too. That’s for three reasons:

  1. They didn't quite fit in with being part of the feed here on the Thought Shrapnel blog.
  2. I find that having them as fully public means I self-censor a bit, something I don't have to do when I know I'm talking to people who better understand my context.
  3. Supporters on Patreon can get access to a private RSS feed they can add to their favourite podcast client.
The final bonus of the move is that it's more likely to lead to interactions with the community around Thought Shrapnel. I'm already enjoying interacting with those who I support on Patreon, and look forward to doing similarly with you!

Become a Patron!

Thanks in advance 👍

OERu has a social network

I saw (via OLDaily) that OERu is now using Mastodon to form a social network. This might work, it might not, but I’m flagging it as it’s the approach that I’ve moved away from for creating Project MoodleNet.

The OERu uses Mastodon, an open source social network with similar features to Twitter.

We encourage OERu learners to use this social network as part of your personal learning environment (PLE) to interact with your personal learning network (PLN). Many of our courses incorporate activities using Mastodon and this technology is a great way to stay connected with your learning community. The OERu hosted version is located at mastodon.oeru.org.

I was initially convinced that this was the right approach to building what Martin Dougiamas has described as “a new open social media platform for educators, focused on professional development and open content”. I got deeply involved in the ActivityPub protocol and geeked-out on how ‘decentralised’ it all would be.

However, I’ve changed my mind. Instead of dropping people into another social network (on top of their accounts on Facebook, Twitter, Instagram, etc.) we’re going to build it around something which will be immediately useful: resource curation. More soon, and follow the Project MoodleNet blog for updates!

Oh, and if you need a short, visual Mastodon explainer, check out this new video.

Source: OERu

Anxiety

“Anxiety is the dizziness of freedom.” (Søren Kierkegaard)

Moral needs and user needs

That products should be ‘user-focused’ goes without question these days. At least, it does for everyone apart from Cassie Robinson, who writes:

This has been sitting uncomfortably with me for a while now. In part that’s because when anything becomes a bit of a dogma I question it, but it’s also because I couldn’t quite marry the mantra to my own personal experiences.

Sometimes, there's more than user stories and 'jobs to be done':
For example, if we are designing the new digital justice system using success measures based on how efficiently the user can complete the thing they are trying to do rather than on whether they actually receive justice, what’s at risk there? And if we prioritise that over time, are we in some way eroding the collective awareness of what “good” justice as an outcome looks like?
She makes a good point. Robinson suggests that we consider 'moral needs' as well as 'user needs':

Designing and iterating services based on current user needs and behaviours means that they are never being designed for who isn’t there. Whose voice isn’t in the data? And how will the new institutions that are needed be created unless we focus more on collective agency and collective needs?

As I continue my thinking around Project MoodleNet this is definitely something to bear in mind.

Source: Cassie Robinson

On struggle

The popular view of life seems to be that mishaps, hardship, and struggle are all things that most people can avoid. If we stop to think about that for a second, that’s obviously untrue; in fact, the opposite is the case.

This article in Lifehacker quotes Seneca, one of my favourite Stoic philosophers:

“Why, then, should we be angry? Why should we lament? We are prepared for our fate: let nature deal as she will with her own bodies; let us be cheerful whatever befalls, and stoutly reflect that it is not anything of our own that perishes. What is the duty of a good man? To submit himself to fate: it is a great consolation. To be swept away together with the entire universe: whatever law is laid upon us that thus we must live and thus we must die, is laid upon the gods.”

As part of my daily reading, I meditate on other tenets of Stoicism. The opening to Epictetus' Enchiridion tells you pretty much everything you need to know:
Of things some are in our power, and others are not. In our power are opinion, movement toward a thing, desire, aversion (turning from a thing); and in a word, whatever are our own acts: not in our power are the body, property, reputation, offices (magisterial power), and in a word, whatever are not our own acts. And the things in our power are by nature free, not subject to restraint nor hindrance: but the things not in our power are weak, slavish, subject to restraint, in the power of others. Remember then that if you think the things which are by nature slavish to be free, and the things which are in the power of others to be your own, you will be hindered, you will lament, you will be disturbed, you will blame both gods and men: but if you think that only which is your own to be your own, and if you think that what is another’s, as it really is, belongs to another, no man will ever compel you, no man will hinder you, you will never blame any man, you will accuse no man, you will do nothing involuntarily (against your will), no man will harm you, you will have no enemy, for you will not suffer any harm.
It's also worth dwelling on this from Marcus Aurelius' Meditations:
Begin each day by telling yourself: Today I shall be meeting with interference, ingratitude, insolence, disloyalty, ill-will, and selfishness, all of them due to the offenders' ignorance of what is good or evil.
Suffering is part of life, and we should embrace it. Control what you can control, and let the rest go.

Source: Lifehacker

Going deep

I don’t think the right term for this is ‘mobile blindness’ but Seth Godin’s analogy is nevertheless instructive.

He talks about the shift over the last 20 years or so from getting our news and information primarily via books and newspapers, to getting it via desktop computers, and now predominantly through our mobile devices. Things have become bite-sized, and our attention field is wide but shallow.

Photokeratitis (snow blindness) happens when there's too much ultraviolet--when the fuel for our eyes comes in too strong and we can't absorb it all. Something similar is happening to each of us, to our entire culture, as a result of the tsunami of noise vying for our attention.

It's possible you can find an edge by going even faster and focusing even more on breadth at the surface. But it's far more satisfying and highly leveraged to go the other way instead. Even if it's just for a few hours a day.

If you care about something, consider taking a moment to slow down and understand it. And if you don't care, no need to even bother with the surface.

This isn't a technology issue, it's an attention issue. Yes, it's possible to argue that these devices are designed to capture your attention. But we all still have a choice.

You can safely ignore what doesn’t align with your goals in life. First, of course, you have to have some goals…

Source: Seth Godin

Derek Sivers has quit Facebook (hint: you should, too)

I have huge respect for Derek Sivers, and really enjoyed his book Anything You Want. His book reviews are also worth trawling through.

In this post, which made its way to the Hacker News front page, Sivers talks about his relationship with Facebook, and why he’s finally decided to quit the platform:

When people would do their “DELETE FACEBOOK!” campaigns, I didn’t bother because I wasn’t using it anyway. It was causing me no harm. I think it’s net-negative for the world, and causing many people harm, but not me, so why bother deleting it?

But today I had a new thought:

Maybe the fact that I use it to share my blog posts is a tiny tiny reason why others are still using it. It’s like I’m still visiting friends in the smoking area, even though I don’t smoke. Maybe if I quit going entirely, it will help my friends quit, too.

Last year, I wrote a post entitled Friends don’t let friends use Facebook. The problem is, it’s difficult. Despite efforts to suggest alternatives, most of the clubs our children are part of (for activities such as swimming and karate) use Facebook. I don’t have an account, but my wife has to if we’re to keep up-to-date. It’s a vicious circle.

Like Sivers, I’ve considered just being on Facebook to promote my blog posts. But I don’t want to be part of the problem:

I had a selfish business reason to keep it. I’m going to be publishing three different books over the next year, and plan to launch a new business, too. But I’m willing to take that small loss in promotion, because it’s the right thing to do. It always feels good to get rid of things I’m not using.
So if you've got a Facebook account and the Cambridge Analytica revelations concern you, try to wean yourself off Facebook. It's literally for the good of democracy.

Ultimately, as Sivers notes, Facebook will go away because of the adoption lifecycle of platforms and products. It’s difficult to think of that, but I’ll leave the last word to the late, great Ursula Le Guin:

We live in capitalism, its power seems inescapable - but then, so did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art. Very often in our art, the art of words.
Source: Sivers.org

Superficial and imperfect knowledge

“To know things well, we must know the details, and as they are almost infinite, our knowledge is always superficial and imperfect.”

(François de La Rochefoucauld)

Bridging technologies

When you go deep enough into philosophy or religion, one of the key insights is that everything is temporary. Success is temporary. Suffering is temporary. Your time on earth is temporary.

One way of thinking about this on a day-to-day basis is that everything is a bridge to something else. So that technology that I’ve been excited about since 2011? Yep, it’s a bridge (or perhaps a raft) to get to something else.

Benedict Evans, who works for the VC firm Andreessen Horowitz, sends out a great, short newsletter every week to around 95,000 people. I’m one of them. In this week’s missive, he linked to a blog post he wrote about bridging technologies.

A bridge product says 'of course x is the right way to do this, but the technology or market environment to deliver x is not available yet, or is too expensive, and so here is something that gives some of the same benefits but works now.'
As with anything, there are good and bad bridging technologies. At the time, it can be hard to spot the difference:

In hindsight, though, not just WAP but the entire feature-phone mobile internet prior to 2007, including i-mode, with cut-down pages and cut-down browsers and nav keys to scroll from link to link, was a bridge. The 'right' way was a real computer with a real operating system and the real internet. But we couldn't build phones that could do that in 1999, even in Japan, and i-mode worked really well in Japan for a decade.

It's all obvious in retrospect, as with the example of Firefox OS, which was developed at the same time I was at Mozilla:
[T]he problem with the Firefox phone project was that even if you liked the experience proposition - 'almost as good as Android but works on much cheaper phones' - the window of time before low-end Android phones closed the price gap was too short.
Usually, cheap things add more features until people just 'make do' with 80-90% of the full feature set. However, that's not always the case:
Sometimes the ‘right’ way to do it just doesn’t exist yet, but often it does exist but is very expensive. So, the question is whether the ‘cheap, bad’ solution gets better faster than the ‘expensive, good’ solution gets cheap. In the broader tech industry (as described in the ‘disruption’ concept), generally the cheap product gets good. The way that the PC grew and killed specialized professional hardware vendors like Sun and SGi is a good example. However, in mobile it has tended to be the other way around - the expensive good product gets cheaper faster than the cheap bad product can get good.
Evans goes on to talk about autonomous vehicles, something that he's heavily invested in (financially and intellectually) with his VC firm.

In the world of open source, however, it’s a slightly different process. Instead of thinking about the ‘runway’ of capital that you’ve got before you have to give up and go home, it’s about deciding when it no longer makes sense to maintain the project you’re working on. In some cases, the answer to that is ‘never’ which means that the project keeps going and going and going.

It can be good to have a forcing function to focus people’s minds. I’m thinking, for example, of Steve Jobs declaring war on Flash. The reasons he gives are disingenuous (accusing Adobe of not being ‘open’!) but the upshot of Apple declaring Flash as dead to them caused the entire industry to turn upside down. In effect, Flash was a ‘bridge’ to the full web on mobile devices.

Using the idea of technology ‘bridges’ in my own work can lead to some interesting conclusions. For example, the Project MoodleNet work that I’m beginning will ultimately be a bridge to something else for Moodle. Thinking about my own career, each step has been a bridge to something else; the most interesting bridges have been those where I haven’t been quite sure what was on the other side. Or, indeed, if there even was another side…

Source: Benedict Evans

How to choose an open license for your project

I’m so used to working openly by default that I sometimes forget that for many (most?) people it’s a new, and sometimes quite scary, step.

Alfonso Sánchez Uzábal pointed me to choosealicense.com from GitHub, which makes it simple to choose an open license for your software project. Moodle, for example, is GPL, but the site also covers other licenses such as MIT and Apache.

For everything other than software, you’re probably best off with Creative Commons licenses. I’ve been using them on my own work for the last fifteen years and highly recommend them.

Mystery of life

“The mystery of life isn’t a problem to solve, but a reality to experience.” (Frank Herbert)

Decision fatigue and parenting

Our 11 year-old still asks plenty of questions, but also looks things up for himself online. Our seven year-old fires off barrages of questions when she wakes up, to and from school, during dinner, before bed — basically anytime she can get a word in edgeways.

I have sympathy, therefore, for Emma Marris, who decided to show those who aren’t parents of young children (or perhaps those who have forgotten) what it’s like.

I decided to write down every question that required a decision that my two kids asked me during a single day. This doesn’t include simple requests for information like “how do you spell ‘secret club’?” or “what is the oldest animal in the world?” or the perennial favorite, “why do people have to die?” Recording ALL the questions two kids ask in a day would be completely intractable. So, limiting myself to just those queries that required a decision, here are the results.
Some of my favourites from her long list:
  • Can I play on your phone until you wake up?
  • Can we listen to bouncy music instead of this podcast about the Mueller investigation while we make breakfast?
  • Will you pre-chew my gumball since it is too large to fit in my mouth?
  • Will you tell us who you are texting?
  • Do you want to eat the meat out the tail of this shrimp?
Marris says in the comments that her kids are eight and five years old, respectively. You can kind of tell that from the questions.

I’m not saying we’re amazing parents, but one thing we try to do is not just tell our children the answer to their questions, but tell them how we worked it out. That’s particularly important if we used some kind of device to help us find the answer. Recently, I’ve been using the Google Assistant, which to an adult feels almost interface-free. However, there’s a definite knack to it that you forget about once you’re used to using it.

Over and above that, a lot of questions that children ask are permission and equality-related. In other words, they’re asking if they’re allowed to do something, or if you’ll intervene because the other child is doing something they shouldn’t / gaining an advantage. Both my wife and I have been teachers, and the same is true in the classroom.

There are a couple of things I’ve learned here:

  1. If children are asking a lot of permission-related questions, then it's worth your while to help them understand what's allowed and what's not allowed. Allow them to help themselves more than they do currently.
  2. If children are complaining about equality and they're different ages, explain to both of them that you treat them equitably but not equally. When they complain that's not fair, send the older one to bed at the same time as the younger one (and perhaps give them the same amount of pocket money), and get the younger one to help more around the house. They don't stop complaining, but they certainly do it less...
Why is all of this important? Making decisions makes you tired. To quote Marris' first paragraph as the last one here:
Decision fatigue is real. Decision fatigue is the mental exhaustion and reduced willpower that comes from making many, many micro-calls every day. My modern American lifestyle, with its endless variety of choices, from a hundred kinds of yogurt at the grocery store to the more than 4,000 movies available on Netflix, breeds decision fatigue. But it is my kids that really fry my brain. At last I understand that my own mother’s penchant for saying “ask your father” wasn’t deference to her then husband but the most desperate sort of buck-passing–especially since my father dealt with decision fatigue by saying yes to pretty much everything, which is how my brothers and I ended up taking turns rolling down the steep hill we grew up on inside an aluminum trash can.
Source: The Last Word on Nothing

Tech will eat itself

Mike Murphy has been travelling to tech conferences: CES, MWC, and SXSW. He hasn’t been overly impressed by what he’s seen:

The role of technology should be to improve the quality of our lives in some meaningful way, or at least change our behavior. In years past, these conferences have seen the launch of technologies that have indeed impacted our lives to varying degrees, from the launch of Twitter to car stereos and video games.
However, it's all been a little underwhelming:
People always ask me what trends I see at these events. There are the usual words I can throw out—VR, AR, blockchain, AI, big data, autonomy, automation, voice assistants, 3D-printing, drones—the list is endless, and invariably someone will write some piece on each of these at every event. But it’s rare to see something truly novel, impressive, or even more than mildly interesting at these events anymore. The blockchain has not revolutionized society, no matter what some bros would have you believe, nor has 3D-printing. Self-driving cars are still years away, AI is still mainly theoretical, and no one buys VR headsets. But these are the terms you’ll find associated with these events if you Google them.
There's nothing of any real substance being launched at these big, shiny events:
The biggest thing people will remember from this year’s CES is that it rained the first few days and then the power went out. From MWC, it’ll be that it snowed for the first time in years in Barcelona, and from SXSW, it’ll be the Westworld in the desert (which was pretty cool). Quickly forgotten are the second-tier phones, dating apps, and robots that do absolutely nothing useful. I saw a few things of note that point toward the future—a 3D-printed house that could actually better lives in developing nations; robots that could crush us at Scrabble—but obviously, the opportunity for a nascent startup to get its name in front of thousands of techies, influential people, and potential investors can be huge. Even if it’s just an app for threesomes.
As Murphy points out, the more important the destination (i.e. where the event is held) the less important the content (i.e. what is being announced):
When real technology is involved, the destinations aren’t as important as the substance of the events. But in the case of many of these conferences, the substance is the destinations themselves.

However, that shouldn’t necessarily be cause for concern:

There is still much to be excited about in technology. You just won’t find much of it at the biggest conferences of the year, which are basically spring breaks for nerds. But there is value in bringing so many similarly interested people together.

[…]

Just don’t expect the world of tomorrow to look like the marketing stunts of today.

I see these events as a way to catch up the mainstream with what’s been happening in pockets of innovation over the past year or so. Unfortunately, this is increasingly being covered in a layer of marketing spin and hype so that it’s difficult to separate the useful from the trite.

Source: Quartz

Issue #296: Goodbye winter blues

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

On playing video games with your kids

I play ‘video games’ (a curiously old-fashioned term) with my kids all the time. Current favourites are FIFA 18 and Star Wars Battlefront II. We also enjoy Party Golf as a whole family (hilarious!)

My children play different games with each other than they play with me. They’re more likely to play Lego Worlds or Minecraft (the latter always on their tablets). And when I’m away we play word games such as Words With Friends 2 or Wordbase.

The author of this article, David Cole, points out that playing games with his son is a different experience than he was expecting it to be:

So when I imagined playing video games with my son — now 6 — I pictured myself as being the Player 2 that I’d never had in my own childhood. I wouldn’t mind which games he wanted to play, or how many turns he’d take. I would comfort him through frustrating losses and be a good sport when we competed head-to-head. What I hadn’t anticipated in these fantasies was how much a new breed of video game would end up deeply altering the way we relate. Games of challenge and reflex are still popular of course, but among children my son’s age they’ve been drastically overtaken by a class of games defined by open-ended, expressive play. The hallmark title of this sort is, undeniably, Minecraft.

My son is 11 years old and my daughter seven, so what Cole describes resonates:

My son and I do still play those competitive games, and I hope that he’s learning about practice and perseverance when we do. But those games are about stretching and challenging him to fit the mold of the game’s demands. When we play Minecraft together, the direction of his development, and thus our relationship, is reversed: He converts the world into expressions of his own fantasies and dreams. And by letting me enter and explore those dream worlds with him, I come to understand him in a way that the games from my childhood do not.

The paragraph that particularly resonated with me was this one, as it not only describes my relationship with my children when playing video games, but also parenting as being vastly different (for better and worse) than I thought it would be:

The working rhythms of our shared play allow for stretches of silent collaboration. It’s in these contemplative moments that I notice how distinct this feeling is from my own childhood, as well as the childhood I had predicted for my son. I thought I would be his Player 2, an ideal peer that would make his childhood awesome in ways that mine was not. In retrospect, that was always just a picture of me, not of him and not of us.

A lovely article that reminded me of the heartwarming Player 2 video short based on a true story from a YouTube comments section...

Source: The Cut

Browser extensions FTW

Last week, the New York Times issued a correction to an article written by Justin Bank about President Trump. This was no ordinary correction, however:

Because of an editing error involving a satirical text-swapping web browser extension, an earlier version of this article misquoted a passage from an article by the Times reporter Jim Tankersley. The sentence referred to America’s narrowing trade deficit during “the Great Recession,” not during “the Time of Shedding and Cold Rocks.” (Pro tip: Disable your “Millennials to Snake People” extension when copying and pasting.)
Social networks went crazy over it. 😂

The person responsible has written an excellent follow-up article about the joys of browser extensions:

Browser extensions, when used properly and sensibly, can make your internet experience more productive and efficient. They can make thesaurus recommendations, more accessible, create to-do lists in new tabs, or change the color scheme of web pages to make them more readable.
The examples given by the author are all for the Chrome web browser, but all modern browsers have extensions:
Unfortunately — if somewhat comically — my use of that extension last week was far from joyful or efficient. But, despite my embarrassment to have distracted from the good work of my colleagues, I still passionately recommend the subversive, web-altering extensions you can find in a category the Chrome Web Store lists as "fun".
Here are my three favourites of the ones he lists in the article (which, as ever, I suggest you check out in full). I'm particularly pleased to have come across the Word Replacer (Chrome) extension, which allows you to effectively make your own extension. But as the author notes, be careful of the consequences when copying and pasting...
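To show why the NYT mishap is so easy to make, here's a minimal sketch of how a text-swapping extension like "Millennials to Snake People" works under the hood. This is my own illustrative version, not the real extension's code; the replacement list and function names are hypothetical.

```javascript
// Hypothetical replacement list; the real extensions ship their own.
const REPLACEMENTS = [
  ["the Great Recession", "the Time of Shedding and Cold Rocks"],
];

// Pure helper: swap every occurrence of each phrase in a string.
// Kept separate from the DOM so the logic is easy to test.
function swapText(text, replacements = REPLACEMENTS) {
  let result = text;
  for (const [from, to] of replacements) {
    result = result.split(from).join(to);
  }
  return result;
}

// In the browser, a content script would walk the page's text nodes
// and rewrite each one in place (this part needs a DOM to run):
function swapPage(root) {
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  let node;
  while ((node = walker.nextNode())) {
    node.nodeValue = swapText(node.nodeValue);
  }
}
```

Because the extension rewrites the text nodes themselves, anything you copy out of the page carries the swapped phrases with it, which is exactly how that correction came about.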

Source: The New York Times

The tenets of 'Slow Thought'

The slow movement began with ‘slow food’ which was in opposition to, unsurprisingly, ‘fast food’. Since then there’s been, with greater and lesser success, ‘slow’ versions of many things: education, cinema, religion… you name it.

In this article, the author suggests ‘slow thought’. Unfortunately, the connotation around ‘slow thinking’ is already negative so I don’t think the manifesto they provide will catch on. They also quote French philosophers…

In the tradition of the Slow Movement, I hereby declare my manifesto for ‘Slow Thought’. This is the first step toward a psychiatry of the event, based on the French philosopher Alain Badiou’s central notion of the event, a new foundation for ontology – how we think of being or existence. An event is an unpredictable break in our everyday worlds that opens new possibilities. The three conditions for an event are: that something happens to us (by pure accident, no destiny, no determinism), that we name what happens, and that we remain faithful to it. In Badiou’s philosophy, we become subjects through the event. By naming it and maintaining fidelity to the event, the subject emerges as a subject to its truth. ‘Being there,’ as traditional phenomenology would have it, is not enough. My proposal for ‘evental psychiatry’ will describe both how we get stuck in our everyday worlds, and what makes change and new things possible for us.
That being said, if only the author could state them more simply and make each stand alone, I think the 'seven proclamations' do have value:
  1. Slow Thought is marked by peripatetic Socratic walks, the face-to-face encounter of Levinas, and Bakhtin’s dialogic conversations
  2. Slow Thought creates its own time and place
  3. Slow Thought has no other object than itself
  4. Slow Thought is porous
  5. Slow Thought is playful
  6. Slow Thought is a counter-method, rather than a method, for thinking as it relaxes, releases and liberates thought from its constraints and the trauma of tradition
  7. Slow Thought is deliberate
Isn't this just Philosophy? In any case, my favourite paragraph is probably this one:
Slow Thought is a porous way of thinking that is non-categorical, open to contingency, allowing people to adapt spontaneously to the exigencies and vicissitudes of life. Italians have a name for this: arrangiarsi – more than ‘making do’ or ‘getting by’, it is the art of improvisation, a way of using the resources at hand to forge solutions. The porosity of Slow Thought opens the way for potential responses to human predicaments.
We definitely need more 'arrangiarsi' in the world.

Source: Aeon


Wisdom and riches

“When a young man was boasting in the theatre and saying, I am wise, for I have conversed with many wise men; Epictetus said, I also have conversed with many rich men, but am not rich.”

(from ‘Fragments of Epictetus’)

Different ways of knowing

The Book of Life from the School of Life is an ever-expanding treasure trove of wisdom. In this entry, entitled Knowing Things Intellectually vs. Knowing Them Emotionally, the focus is on the different ways in which we ‘know’ things:

An intellectual understanding of the past, though not wrong, won’t by itself be effective in the sense of being able to release us from the true intensity of our neurotic symptoms. For this, we have to edge our way towards a far more close-up, detailed, visceral appreciation of where we have come from and what we have suffered. We need to strive for what we can call an emotional understanding of the past – as opposed to a top-down, abbreviated intellectual one.
I've no idea about my own intellectual abilities, although I guess I do have a terminal degree. What I do know is that I've spoken with many smart people who, like me, have found it difficult to deal with emotions such as anxiety. There's definitely a difference between 'knowing' as in 'knowing what's wrong with you' and 'knowing how to fix it'.
Psychotherapy has long recognised this distinction. It knows that thinking is hugely important – but on its own, within the therapeutic process itself, it is not the key to fixing our psychological problems.

[…]

Therapy builds on the idea of a return to live feelings. It’s only when we’re properly in touch with feelings that we can correct them with the help of our more mature faculties – and thereby address the real troubles of our adult lives.

Threaded through the article is the example of an abusive relationship in childhood. Thankfully, I didn’t experience that, but it does make a great suggestion: finding the source of one’s anxiety and fully experiencing the emotion at its core might be helpful.

And it is on the basis of this kind of hard-won emotional knowledge, not its more painless intellectual kind, that we may one day, with a fair wind, discover a measure of relief for some of the troubles within.
Source: The Book of Life

Beginning and middle

“Don’t compare your beginning to someone else’s middle."

(Jon Acuff)

Slack's bait-and-switch?

I remember the early days of Twitter. It was great, as there were many different clients, both native apps and web-based ones. There was lots of innovation in the ecosystem and, in fact, the ‘pull-to-refresh’ feature that’s now baked into every social app on a touchscreen device was first created for a third-party Twitter client.

Twitter then, of course, once it had reached critical mass and mainstream adoption, killed off that third party ecosystem to ‘own the experience’. It looks like Slack, the messaging app for teams, is doing something similar by turning off support for IRC and XMPP gateways:

As Slack has evolved over the years, we’ve built features and capabilities — like Shared Channels, Threads, and emoji reactions (to name a few) — that the IRC and XMPP gateways aren’t able to handle. Our priority is to provide a secure and high-quality experience across all platforms, and so the time has come to close the gateways.
A number of people weren't happy about this, notably those who rely on the superior accessibility features available through IRC and XMPP. A software developer and consultant by the name of JC Brand takes Slack to task:
We all know the real reason Slack has closed off their gateways. Their business model dictates that they should.

Slack’s business model is to record everything said in a workspace and then to sell you access to their record of your conversations.

They’re a typical walled garden, information silo or Siren Server

So they have to close everything off, to make sure that people can’t extract their conversations out of the silo.

We saw it with Google, who built Gtalk on XMPP and even federated with other XMPP servers, only to later stop federation and XMPP support in favour of trying to herd the digital cattle into the Google+ enclosure.

Facebook, who also built their chat app on XMPP at first allowed 3rd party XMPP clients to connect and then later dropped interoperability.

Twitter, although not using or supporting XMPP, had a vibrant 3rd party client ecosystem which they killed off once they felt big enough.

Slack, like so many others before them, pretend to care about interoperability, opening up just so slightly, so that they can lure in people with the promise of “openness”, before eventually closing the gate once they’ve achieved sufficient size and lock-in.

I’m definitely on the side of open source people/projects here, but it’s worth noting that the author uses the post to promote the solution he’s been developing. And why not?

There’s a comment below the post which makes, I think, a good point:

I'm betting this decision wasn't made by the same folks who were at Slack (or Facebook, Google, etc) and thought adding support for the open protocols was a good thing. I bet the decision is a product of time over any attempt to trick anyone. Over time people change roles, leave, and slowly new leadership emerges. Outside pressures (market growth, investors) require a change in priority and the org shifts away from supporting things that had low adoption and ongoing maintenance cost.

So I don’t think it’s as malicious as the author implies (Bait and Switch) as that requires some nefarious planning and foresight. I think it’s more likely to be business/product evolution, which still sucks for adopters and the free net, but not as maleficent. Just, unfortunately, the nature of early tech businesses maturing into Just Another Business.

Indeed.

Source: Opkode

The security guide as literary genre

I stumbled across this conference presentation from back in January by Jeffrey Moro, “a doctoral student in English at the University of Maryland, College Park, where [he studies] the textual and material histories of media technologies”.

It’s a short, but very interesting one, taking a step back from the current state of play to ask what we’re actually doing as a society.

Over the past year, in an unsurprising response to a host of new geopolitical realities, we’ve seen a cottage industry of security recommendations pop up in venues as varied as The New York Times, Vice, and even Teen Vogue. Together, these recommendations form a standard suite of answers to some of the most messy questions of our digital lives. “How do I stop advertisers from surveilling me?” “How do I protect my internet history from the highest bidder?” And “how do I protect my privacy in the face of an invasive or authoritarian government?”
It's all very well having a plethora of guides to secure ourselves against digital adversaries, but this isn't something that we need to really think about in a physical setting within the developed world. When I pop down to the shops, I don't think about the route I take in case someone robs me at gunpoint.

So Moro is thinking about these security guides as a kind of ‘literary genre’:

I’m less interested in whether or not these tools are effective as such. Rather, I want to ask how these tools in particular orient us toward digital space, engage imaginaries of privacy and security, and structure relationships between users, hackers, governments, infrastructures, or machines themselves? In short: what are we asking for when we construe security as a browser plugin?
There's a wider issue here about the pace of digital interactions, security theatre, and most of us getting news from an industry hyper-focused on online advertising. A recent article in the New York Times was thought-provoking in that sense, comparing what it's like going back to (or in some cases, getting for the first time) all of your news from print media.

We live in a digital world where everyone’s seemingly agitated and angry, all of the time:

The increasing popularity of these guides evinces a watchful anxiety permeating even the most benign of online interactions, a paranoia that emerges from an epistemological collapse of the categories of “private” and “public.” These guides offer a way through the wilderness, techniques by which users can harden that private/public boundary.
The problem with this 'genre' of security guide, says Moro, is that even the good ones from groups like the EFF (of which I'm a member) make you feel like locking down everything. The problem with that, of course, is that it's very limiting.
Communication, by its very nature, demands some dimension of insecurity, some material vector for possible attack. Communication is always already a vulnerable act. The perfectly secure machine, as Chun notes, would be unusable: it would cease to be a computer at all. We can then only ever approach security asymptotically, always leaving avenues for attack, for it is precisely through those avenues that communication occurs.
I'm a great believer in serendipity, but the problem with that from a technical point of view is that it increases my attack surface. It's a source of tension that I actually feel most days.
There is no room, or at least less room, in a world of locked-down browsers, encrypted messaging apps, and verified communication for qualities like serendipity or chance encounters. Certainly in a world chock-full with bad actors, I am not arguing for less security, particularly for those of us most vulnerable to attack online... But I have to wonder how our intensive speculative energies, so far directed toward all possibility for attack, might be put to use in imagining a digital world that sees vulnerability as a value.
At the end of the day, this kind of article serves to show just how different our online, digital environment is from our physical reality. It's a fascinating sideways look, looking at the security guide as a 'genre'. A recommended read in its entirety — and I really like the look of his blog!

Source: Jeffrey Moro

Do the thing

“Do the thing you think you cannot do."

(Eleanor Roosevelt)

Memento mori

As I’ve mentioned before on Thought Shrapnel, next to my bed I have a memento mori, an object that reminds me that one day I will die.

My friend Ian O’Byrne had some sad news last week: his grandmother died. However, in an absolutely fantastic and very well-written post he wrote in the aftermath, he mentioned how meditating regularly on death, and having a memento mori has really helped him to live his life to the fullest.

I believe that it is reminders like this one that we desperately need in our own lives. It seems like a normal practice that many of us would rather ignore death, or do everything to avoid it and pretend it is not true. It may be the root of ego that causes us to run away from anything that reminds us of this reality. As a safety mechanism, we build this comfortable narrative that avoids this tough subject.

We also at times simply refuse to look at life as it is. We’re scared to meditate and reflect on the fact that we are all going to die. Just the fact that I wrote this post, and you’re reading it may strike you as a bit dark and macabre.

With all of our technological, surgical, and pharmaceutical inventions and devices, we expect, almost demand, to live a long life, live it in good health and look good doing it. We live in denial that we will die. But, previous civilizations were acutely aware of their own mortality. Memento mori was the philosophy of reflecting on your own death as a form of spiritual improvement, and rejecting earthly vanities.

So having a memento mori isn't morbid, it's actually a symbol that you're looking to maximise your time here on earth. When I used a Mac, I had a skull icon at the top of the dock on the left-hand side of my screen.

Ian suggests some alternatives:

There are multiple ways to include this process of memento mori in your life. For some, it is as simple as including artwork and symbols in your home and daily interactions. These may be symbols of mortality which encourage reflection on the meaning and fleetingness of life. In my home we have skulls in various pieces of art and sculptures that help serve as a reminder.

I had the opportunity last week to revisit Buster Benson's influential 2013 post Live Like a Hydra. In it, he references an experiment he called If I Lived 100 Times, whereby he modelled life expectancy data for someone his age. It's interesting reading: how many books will you read before you die? How many new countries will you travel to? It certainly makes you think.

Back to Ian’s article and he turns to the Stoic philosopher Epictetus for some advice:

Memento mori is an opportunity, should you take it, to reflect on the invigorating and humbling aspects of life. By no means am I an expert on this. I still struggle daily with understanding my role and mission in life. In these struggles, I also need to remember that I may not wake up tomorrow. As stated by Epictetus, “Keep death and exile before your eyes each day, along with everything that seems terrible— by doing so, you’ll never have a base thought nor will you have excessive desire.” These opportunities to reflect and meditate provide an opportunity to create and enjoy the life you want.

Wise words indeed.

Source: W. Ian O’Byrne

Microcast #005

[audio src=“http://188.166.96.48/wp-content/uploads/2018/03/episode-005.mp3”][/audio]

Thinking through an approach to building Project MoodleNet that came to me this weekend, using Google search, Amazon filtering, and the Pinterest browser button as mental models.


Issue #295: A wee problem...

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Living an antifragile life

Nassim Nicholas Taleb’s new book is out, which made me think about his previous work, Antifragile (which I enjoyed greatly).

As Shane Parrish quotes in a 2014 article on the subject, Taleb defines antifragility in the following way:

Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile. Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better. This property is behind everything that has changed with time: evolution, culture, ideas, revolutions, political systems, technological innovation, cultural and economic success, corporate survival, good recipes (say, chicken soup or steak tartare with a drop of cognac), the rise of cities, cultures, legal systems, equatorial forests, bacterial resistance … even our own existence as a species on this planet.
This definition, and the examples Taleb pointed to in his book helped me understand the world a bit better. It's easy to point to entitled people and see how they manage to get richer no matter what happens. But I think we all know people (and in fact companies, organisations, and communities) that are just set up for success. The notion of them being 'antifragile' helps describe that.

Parrish quotes Buster Benson who boils Taleb’s book down to one general, underlying principle:

Play the long game, keep your options open and avoid total failure while trying lots of different things and maintaining an open mind.
More specifically, Benson notes Taleb's 10 principles of antifragility:
  1. Stick to simple rules
  2. Build in redundancy and layers (no single point of failure)
  3. Resist the urge to suppress randomness
  4. Make sure that you have your soul in the game
  5. Experiment and tinker — take lots of small risks
  6. Avoid risks that, if lost, would wipe you out completely
  7. Don’t get consumed by data
  8. Keep your options open
  9. Focus more on avoiding things that don’t work than trying to find out what does work
  10. Respect the old — look for habits and rules that have been around for a long time
Some great suggestions here, and I'm very much looking forward to reading Taleb's new book. As a bonus, in putting together this post I discovered that, after jobs at Twitter, Slack, and Amazon, Buster Benson is writing a book. He's looking for 100 supporters at $1 a month so I didn't even think twice and pledged!

Source: Farnam Street

The end/beginning

“Often when you think you’re at the end of something, you’re at the beginning of something else."

(Fred Rogers)

Archives of Radical Philosophy

A quick one to note that the entire archive (1972-2018) of Radical Philosophy is now online. It describes itself as a “UK-based journal of socialist and feminist philosophy” and there are articles in there from Pierre Bourdieu, Judith Butler, and Richard Rorty.

If nothing else, these essays and many others should upend facile notions of leftist academic philosophy as dominated by “postmodern” denials of truth, morality, freedom, and Enlightenment thought, as doctrinaire Stalinism, or little more than thought policing through dogmatic political correctness. For every argument in the pages of Radical Philosophy that might confirm certain readers' biases, there are dozens more that will challenge their assumptions, bearing out Foucault’s observation that “philosophy cannot be an endless scrutiny of its own propositions.”
That's my bedtime reading sorted for the foreseeable, then...

Source: Open Culture

Do the tools you use matter?

An interesting post from Austin Kleon on whether tools matter. It was prompted by the image accompanying this post, which met with some objections when he shared it with others:

On my Instagram, a follower was very upset with the above cartoon, saying it was “mean” and “hurtful” and not smart and ungrateful to my fans, and that I should try to “remember what it was like to be a beginner.”
He defends his position, partly by telling stories, but also by stating:
There are actually very good reasons for not wanting to teach young artists. There are good reasons for not answering a question like, “What brand of pen do you use?” or questions about process at all.

If you are just starting off and I tell you exactly how I work, right down to the brand of pen and notebook, I am, in some small sense, robbing you of the experience of finding your own materials and your own way of working.

It’s been interesting seeing Bryan Mathers' journey over the last five years. I’ve seen him go from using basic apps which work ‘just fine’ to reaching the limits of those and having to upgrade to more powerful stuff. That’s a voyage of discovery, but along the way it’s absolutely useful to find out what other people use.

Kleon points out that we can do better than tool-related questions:

So, yes, the tools matter, but again, it’s all about what you are trying to achieve. So a question like, “What brand of pen do you use?” is not as good as “How do you get that thick line quality?” or “How do you dodge Writer’s Block?”
I'm a fan of a great site called Uses This (formerly 'The Setup') which asks a range of people the hardware and software they use to get stuff done. The interviews are always structured around the same four questions, but the best responses are ones that take the idea and run with it a bit.

Note to self: update the version of this I did back in 2011.

Source: Austin Kleon

Is your smartphone a very real part of who you are?

I really enjoy Aeon’s articles, and probably should think about becoming a paying subscriber. They make me think.

This one is about your identity and how much of it is bound up with your smartphone:

After all, your smartphone is much more than just a phone. It can tell a more intimate story about you than your best friend. No other piece of hardware in history, not even your brain, contains the quality or quantity of information held on your phone: it ‘knows’ whom you speak to, when you speak to them, what you said, where you have been, your purchases, photos, biometric data, even your notes to yourself – and all this dating back years.
I did some work on mind, brain, and personal identity as part of my undergraduate studies in Philosophy. I'm certainly sympathetic to the argument that things outside our body can become part of who we are:
Andy Clark and David Chalmers... argued in ‘The Extended Mind’ (1998) that technology is actually part of us. According to traditional cognitive science, ‘thinking’ is a process of symbol manipulation or neural computation, which gets carried out by the brain. Clark and Chalmers broadly accept this computational theory of mind, but claim that tools can become seamlessly integrated into how we think. Objects such as smartphones or notepads are often just as functionally essential to our cognition as the synapses firing in our heads. They augment and extend our minds by increasing our cognitive power and freeing up internal resources.
So if you've always got your smartphone with you, it's possible to outsource things to it. For example, you don't have to remember so many things, you just need to know how to retrieve them. In the age of voice assistants, that becomes ever-easier.

This is known as the ‘extended mind thesis’.

This line of reasoning leads to some potentially radical conclusions. Some philosophers have argued that when we die, our digital devices should be handled as remains: if your smartphone is a part of who you are, then perhaps it should be treated more like your corpse than your couch. Similarly, one might argue that trashing someone’s smartphone should be seen as a form of ‘extended’ assault, equivalent to a blow to the head, rather than just destruction of property. If your memories are erased because someone attacks you with a club, a court would have no trouble characterising the episode as a violent incident. So if someone breaks your smartphone and wipes its contents, perhaps the perpetrator should be punished as they would be if they had caused a head trauma.
These are certainly questions I'm interested in. I've seen some predictions that Philosphy graduates are going to be earning more than Computer Science graduates in a decade's time. I can see why (and I certainly hope so!)

Source: Aeon

Microcast #004

[audio src=“http://188.166.96.48/wp-content/uploads/2018/03/microcast-004.mp3”][/audio]
Is it really a ‘skills gap’ that we should be talking about? What’s the real problem here?

Links:

Masterpieces

“Masterpieces are not single and solitary births; they are the outcome of many years of thinking in common, of thinking by the body of the people, so that the experience of the mass is behind the single voice.”

(Virginia Woolf)

Microcast #003

[audio src=“http://188.166.96.48/wp-content/uploads/2018/03/microcast-003.mp3”][/audio]
What technologies are going to be used with Project MoodleNet?

Links:

30,000 hours of sleep

“Those who research world-class performance focus only on what students do in the gym or track or practice room. Everyone focuses on the most obvious, measurable forms of work and tries to make these more effective and more productive. They don’t ask whether there are other ways to improve performance, and improve your life.

This is how we’ve come to believe that world-class performance comes after 10,000 hours of practice. But that’s wrong. It comes after 10,000 hours of practice, 12,500 hours of deliberate rest, and 30,000 hours of sleep.”

(Alex Soojung-Kim Pang)

Teaching kids about computers and coding

Not only is Hacker News a great place to find the latest news about tech-related stuff, it’s also got some interesting ‘Ask HN’ threads sourcing recommendations from the community.

This particular one starts with a user posing the question:

Ask HN: How do you teach you kids about computers and coding?

Please share what tools & approaches you use - it may Scratch, Python, any kids specific like Linux distros, Raspberry Pi or recent products like Lego Boost… Or your experiences with them.. thanks.

Like sites such as Reddit and Stack Overflow, responses are voted up based on their usefulness. The most-upvoted response was this one:

My daughter is almost 5 and she picked up Scratch Jr in ten minutes. I am writing my suggestions mostly from the context of a younger child.

I approached it this way, I bought a book on Scratch Jr so I could get up to speed on it. I walked her through a few of the basics, and then I just let her take over after that.

One other programming related activity we have done is the Learning Resources Code & Go Robot Mouse Activity. She has a lot of fun with this as you have a small mouse you program with simple directions to navigate a maze to find the cheese. It uses a set of cards to help then grasp the steps needed. I switch to not using the cards after a while. We now just step the mouse through the maze manually adding steps as we go.

One other activity to consider is the robot turtles board game. This teaches some basic logic concepts needed in programming.

For an older child, I did help my nephew to learn programming in Python when he was a freshman in high school. I took the approach of having him type in games from the free Python book. I have always though this was a good approach for older kids to get the familiar with the syntax.

Something else I would consider would be a robot that can be programmer with Scratch. While I have not done this yet, I think for kid seeing the physical results of programming via a robot is a powerful way to capture interest.

But I think my favourite response is this one:

What age range are we talking about? For most kids aged 6-12 writing code is too abstract to start with. For my kids, I started making really simple projects with a Makey Makey. After that, I taught them the basics with Scratch, since there are tons of fun tutorials for kids. Right now, I'm building a Raspberry Pi-powered robot with my 10yo (basically it's a poor man's Lego Mindstorm).

The key is fun. The focus is much more on ‘building something together’ than ‘I’ll learn you how to code’. I’m pretty sure that if I were to press them into learning how to code it will only put them off. Sometimes we go for weeks without building on the robot, and all of the sudden she will ask me to work on it with her again.

My son is sailing through his Computer Science classes at school because of some webmaking and ‘coding’ stuff we did when he was younger. He’s seldom interested, however, if I want to break out the Raspberry Pi and have a play.

At the end of the day, it’s meeting them where they’re at. If they show an interest, run with it!

Source: Hacker News

Microcast #002

Building a bridge

“I learned that a long walk and calm conversation are an incredible combination if you want to build a bridge.”

(Seth Godin)

The three things you need to make friends over the age of 30

This article from 2012 was referenced in something I was reading last week:

As external conditions change, it becomes tougher to meet the three conditions that sociologists since the 1950s have considered crucial to making close friends: proximity; repeated, unplanned interactions; and a setting that encourages people to let their guard down and confide in each other, said Rebecca G. Adams, a professor of sociology and gerontology at the University of North Carolina at Greensboro. This is why so many people meet their lifelong friends in college, she added.
I've never particularly had wide group of friends, even a child. Acquaintances, absolutely. I was on the football team and reasonably popular, it's just that I can be what some people would term 'emotionally distant'.

But making friends in your thirties seems to be something that’s difficult for many people. Not that I’m overly-concerned about it, to be honest. A good Stoic should be self-contained.

The article makes a good point about differences that don’t seem to matter when people are younger. For example, coming from a wealthy family (or having a job that pays well) seems to somehow play a bigger role.

And then…

Adding children to the mix muddles things further. Suddenly, you are surrounded by a new circle of parent friends — but the emotional ties can be tenuous at best, as the comedian Louis C. K. related in one stand-up routine: “I spend whole days with people, I’m like, I never would have hung out with you, I didn’t choose you. Our children chose each other. Based on no criteria, by the way. They’re the same size.”
Indeed, although there's some really interesting people I've met through my children. I wouldn't particularly call those people friends, though. Perhaps I set the bar too high?

Ultimately, though, there’s more at work here than just life changes happening to us.

External factors are not the only hurdle. After 30, people often experience internal shifts in how they approach friendship. Self-discovery gives way to self-knowledge, so you become pickier about whom you surround yourself with, said Marla Paul, the author of the 2004 book The Friendship Crisis: Finding, Making, and Keeping Friends When You’re Not a Kid Anymore. “The bar is higher than when we were younger and were willing to meet almost anyone for a margarita,” she said.

Manipulators, drama queens, egomaniacs: a lot of them just no longer make the cut.

Well, exactly. And I think things are different for men and women (as well as, I guess, those who don’t strongly identify as either).

Source: The New York Times

Microcast #001

[audio src="http://188.166.96.48/wp-content/uploads/2018/03/episode-001.mp3"][/audio]

What is microcasting? Why has it suddenly appeared on Thought Shrapnel? Whose voice will you hear? How often will one of these appear in the stream? How are these produced?

Links:

Issue #294: Snowmaggedon ❄️

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Happiness

“Happiness is where you find it, not where you go in search of it.”

(John Kay)

Microcast #000

[audio src=“http://188.166.96.48/wp-content/uploads/2018/03/episode-000.mp3”][/audio]
Just setting this thing up with the assistance of my two children…

Tact

“Tact is the art of making a point without making an enemy.

(Sir Isaac Newton)

Geek social fallacies

I came across this via a chain of links that took me down a rabbithole. I’m pretty sure it started with an article referenced on Hacker News, but I’m not sure.

In any case, I thought it was pretty interesting. Basically someone who self-identifies as a geek giving other geeks some advice. Having said that, it’s probably applicable more widely than that, particularly among men.

Here’s a taste:

Within the constellation of allied hobbies and subcultures collectively known as geekdom, one finds many social groups bent under a crushing burden of dysfunction, social drama, and general interpersonal wack-ness. It is my opinion that many of these never-ending crises are sparked off by an assortment of pernicious social fallacies -- ideas about human interaction which spur their holders to do terrible and stupid things to themselves and to each other.
There's a list of five such fallacies, my favourite being:

Geek Social Fallacy #4: Friendship Is Transitive

Every carrier of GSF4 has, at some point, said:

“Wouldn’t it be great to get all my groups of friends into one place for one big happy party?!”

If you groaned at that last paragraph, you may be a recovering GSF4 carrier.

GSF4 is the belief that any two of your friends ought to be friends with each other, and if they’re not, something is Very Wrong.

The milder form of GSF4 merely prevents the carrier from perceiving evidence to contradict it; a carrier will refuse to comprehend that two of their friends (or two groups of friends) don’t much care for each other, and will continue to try to bring them together at social events. They may even maintain that a full-scale vendetta is just a misunderstanding between friends that could easily be resolved if the principals would just sit down to talk it out.

A more serious form of GSF4 becomes another “friendship test” fallacy: if you have a friend A, and a friend B, but A & B are not friends, then one of them must not really be your friend at all. It is surprisingly common for a carrier, when faced with two friends who don’t get along, to simply drop one of them.

On the other side of the equation, a carrier who doesn’t like a friend of a friend will often get very passive-aggressive and covertly hostile to the friend of a friend, while vigorously maintaining that we’re one big happy family and everyone is friends.

GSF4 can also lead carriers to make inappropriate requests of people they barely know – asking a friend’s roommate’s ex if they can crash on their couch, asking a college acquaintance from eight years ago for a letter of recommendation at their workplace, and so on. If something is appropriate to ask of a friend, it’s appropriate to ask of a friend of a friend.

Arguably, Friendster was designed by a GSF4 carrier.

Hilarious and insightful at the same time.

Source: Plausibly Deniable

Google's new Slack competitor

How many failed ‘social’ and ‘chat’ products has Google racked up now? Despite that, their new Slack competitor, Hangouts Chat looks promising:

To be clear, Hangouts Chat is a totally separate and distinct service from Hangouts proper, which still lives in your Google mail inbox. And while we’ll forgive you for rolling your eyes at yet another chat service from Google (the number of different apps the company has built is legendary at the point), Hangouts Chat does offer something potentially valuable to companies using G Suite – assuming they’re not on Slack already.

Words
Given Google's focus on AI across basically all of its products, it's no surprise that Hangouts Chat will use machine learning to try and figure out what users might need. Specifically, Google says AI will help book meeting rooms, find files "and more." Specifically, a link between Chat and Calendar will learn how to suggest locations to book by analyzing attendees' "building and floor location, previous booking history, audio/video equipment needs and room capacity requirements." It's hard to say how well this will work — but anyone working in a semi-large company also knows that booking a meeting room likely can't get any worse than it is right now.
I'm looking forward to giving this a try, particularly if they've learned from some of the problems that come with Slack. Also, with GDPR being enforced soon, I'm more OK with sharing more of my data with Google. I even bought a Chromebox this week...

Source: Engadget

10 breakthrough technologies for 2018

I do like MIT’s Technology Review. It gives a glimpse of cool future uses of technology, while retaining a critical lens.

Every year since 2001 we’ve picked what we call the 10 Breakthrough Technologies. People often ask, what exactly do you mean by “breakthrough”? It’s a reasonable question—some of our picks haven’t yet reached widespread use, while others may be on the cusp of becoming commercially available. What we’re really looking for is a technology, or perhaps even a collection of technologies, that will have a profound effect on our lives.
Here's the list of their 'breakthrough technologies' for 2018:
  1. 3D metal printing
  2. Artificial embryos
  3. Sensing city
  4. AI for everybody
  5. Dueling neural networks
  6. Babel-fish earbuds
  7. Zero-carbon natural gas
  8. Perfect online privacy
  9. Genetic fortune-telling
  10. Materials' quantum leap
It's a fascinating list, partly because of the names they've given ('genetic fortune telling'!) to things which haven't really been given a mainstream label yet. Worth exploring in more details, as they flesh out each on of these in what is a reasonably lengthy article.

Source: MIT Technology Review

The moon is getting 4G

Yep, you read that headline correctly. Vodafone and Nokia are getting huge amounts of publicitly for partnering with scientists to put a 4G network on the moon.

Why? Because it takes too much power to beam back high-definition video directly from the lunar rovers to the earth. So, instead, it’ll be relayed over a data network on the moon and then transmitted back to earth.

It’s totally a marketing thing for Vodafone and Nokia, but it also sounds totally cool…

Source: BBC News

Possible - impossible

“The only way of finding the limits of the possible is by going beyond them into the impossible.”

(Arthur C. Clarke)

Your best decisions don't come when you demand them

As with every episode so far, I greatly enjoyed listening to a recent episode of the Hurry Slowly podcast, this time with interviewee Bill Duggan. He had some great words of wisdom to share, including:

If we’re talking about the creative side, you certainly can’t force it, and a very simple thing is you can’t solve every problem in one day. You can’t solve every problem in one week. You can’t solve every problem in one year. Some problems you just can’t solve, and you don’t know you can’t solve it until you give up trying to solve it.
He makes the point during the episode that if you know what you're doing, and have done something similar before, then there's no problem in pushing on until midnight to get stuff done. However, if you're working overtime to try and solve a problem, a lot of research suggests that you'd be better off doing something else to allow your subconscious to work on it, and spark those 'aha!' moments.

Source: Hurry Slowly

Some great links for Product Managers

As I’ve mentioned before, my new role at Moodle is essentially one of a product manager. I’ve done things which overlap the different elements of the role before but never had them in this combination:

Product managers are responsible for guiding the success of a product and leading the cross-functional team that is responsible for improving it. It is an important organizational role — especially in technology companies — that sets the strategy, roadmap, and feature definition for a product or product line. The position may also include marketing, forecasting, and profit and loss (P&L) responsibilities. In many ways, the role of a product manager is similar in concept to a brand manager at a consumer packaged goods company.
As a result, I found this list of resources from Product Manager HQ very useful. I reckon I'd come across about 50% of the tools and apps listed before, so I'm looking forward to exploring the other half!

Here’s a few that I hadn’t heard of before:

Mockingbird: Helps you you create and share clickable wireframes. Use it to make mockups of your website or application in minutes.

TinyPM: Lightweight and smart agile collaboration tool with product management, backlog, taskboard, user stories and wiki.

Roadmunk: Visual roadmap software for product management.

Sprint.ly: Agile project management software for your whole team.

UXCam: Allows you to eliminate customer struggle and improve user experience by capturing and visualizing screen video and user interaction data.

The definition at the top of this post comes from a whole guide put together for new Product Managers by Aha!

Sources: Aha! / Product Manager HQ

 

 

Firefox OS lives on in The Matrix

I still have a couple of Firefox OS phones from my time at Mozilla. The idea was brilliant: using the web as the platform for smartphones. The execution, in terms of the partnership and messaging to the market… not so great.

Last weekend, I actually booted up a device as my daughter was asking about ‘that orange phone you used to let me play with sometimes’. I noticed that Mozilla are discontinuing the app marketplace next month.

All is not lost, however, as open source projects can never truly die. This article reports on a ‘fork’ of Firefox OS being used to resurrect one of my favourite-ever phones, which was used in the film The Matrix:

Quietly, a company called KaiOS, built on a fork of Firefox OS, launched a new version of the OS built specifically for feature phones, and today at MWC in Barcelona the company announced a new wave of milestones around the effort that includes access to apps from Facebook, Twitter and Google in the form of its Voice Assistant, Google Maps, and Google Search; as well as a list of handset makers who will be using the OS in their phones, including HMD/Nokia (which announced its 8110 yesterday), Bullitt, Doro and Micromax; and Qualcomm and Spreadtrum for processing on the inside.
I think I'm going to have to buy the new version of the Nokia 8110 just... because.

Source: TechCrunch

 

The 'loudness' of our thoughts affects how we judge external sounds

This is really interesting:

The "loudness" of our thoughts -- or how we imagine saying something -- influences how we judge the loudness of real, external sounds, a team of researchers from NYU Shanghai and NYU has found.

No-one but you knows what it's like to be inside your head and be subject to the constant barrage of hopes, fears, dreams — and thoughts:
"Our 'thoughts' are silent to others -- but not to ourselves, in our own heads -- so the loudness in our thoughts influences the loudness of what we hear," says Poeppel, a professor of psychology and neural science.

Using an imagery-perception repetition paradigm, the team found that auditory imagery will decrease the sensitivity of actual loudness perception, with support from both behavioural loudness ratings and human electrophysiological (EEG and MEG) results.

“That is, after imagined speaking in your mind, the actual sounds you hear will become softer – the louder the volume during imagery, the softer perception will be,” explains Tian, assistant professor of neural and cognitive sciences at NYU Shanghai. “This is because imagery and perception activate the same auditory brain areas. The preceding imagery already activates the auditory areas once, and when the same brain regions are needed for perception, they are ‘tired’ and will respond less."

This is why meditation, both in terms of trying to still your mind, and meditating on positive things you read, is such a useful activity.

As anyone who’s studied philosophy, psychology, and/or neuroscience knows, we don’t experience the world directly, but find ways to interpret the “bloomin' buzzin' confusion”:

According to Tian, the study demonstrates that perception is a result of interaction between top-down (e.g. our cognition) and bottom-up (e.g. sensory processing of external stimulation) processes. This is because human beings not only receive and analyze upcoming external signals passively, but also interpret and manipulate them actively to form perception.
Source: Science Daily

Issue #293: Making cheese grate again

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Arbitrary deadlines are the enemy of creativity

People like deadlines because people like accountability. There’s nothing wrong with that, apart from the fact that sometimes it’s impossible to know how long something will take, or cost, or even look like in advance. Creativity, in other words, is at odds with arbitrary deadlines:

We may tease them for their diva-like behaviors when they feel persecuted by a deadline, but we have to admit that “develop an amazing new idea” is not something that slides into your schedule, like pick up lunch or respond to new clients. Nor can systems be tweaked and extra hands hired to help hit a goal that requires innovation, the way they can when mundane busy work is piling up. And yet deadlines are a fact of life for any company that wants to stay competitive.
Time is a human construct, not something that's objectively 'out there' in the world. As a result it can be interpreted differently in various situations:
Creative work operates on “event time,” meaning it always requires as much time as needed to organically get the job done. (Think of novel writers or other artists.) Other types of work operate on “clock time,” and are aligned with scheduled events. (A teacher obeys classroom hours and the semester calendar, for instance. An Amazon warehouse manager knows the number of customer orders that can be fulfilled in an hour.)
I don't particularly like the phrase 'creative people' in this article, as I believe everyone is (or at least can be) creative. Having said that, I agree with the sentiment:
Creative people need another scarce commodity: mental space. Working in a large team and constantly collaborating as a group doesn’t allow a person the clarity of mind to solve problems with fresh ingenious ideas. “Alone time or working with just one close collaborator seemed to be the key under the low time pressure conditions,” says Amabile.

Creative people, she adds, “have to be protected. They have to be isolated in a way, from all the other stuff that comes up during a work day. They can’t be called into meetings that are unrelated to this serious problem that they’re trying to address.”

Source: Quartz

Arbitrary deadlines are the enemy of creativity

People like deadlines because people like accountability. There’s nothing wrong with that, apart from the fact that sometimes it’s impossible to know how long something will take, or cost, or even look like in advance. Creativity, in other words, is at odds with arbitrary deadlines:

We may tease them for their diva-like behaviors when they feel persecuted by a deadline, but we have to admit that “develop an amazing new idea” is not something that slides into your schedule, like pick up lunch or respond to new clients. Nor can systems be tweaked and extra hands hired to help hit a goal that requires innovation, the way they can when mundane busy work is piling up. And yet deadlines are a fact of life for any company that wants to stay competitive.
Time is a human construct, not something that's objectively 'out there' in the world. As a result it can be interpreted differently in various situations:
Creative work operates on “event time,” meaning it always requires as much time as needed to organically get the job done. (Think of novel writers or other artists.) Other types of work operate on “clock time,” and are aligned with scheduled events. (A teacher obeys classroom hours and the semester calendar, for instance. An Amazon warehouse manager knows the number of customer orders that can be fulfilled in an hour.)
I don't particularly like the phrase 'creative people' in this article, as I believe everyone is (or at least can be) creative. Having said that, I agree with the sentiment:
Creative people need another scarce commodity: mental space. Working in a large team and constantly collaborating as a group doesn’t allow a person the clarity of mind to solve problems with fresh ingenious ideas. “Alone time or working with just one close collaborator seemed to be the key under the low time pressure conditions,” says Amabile.

Creative people, she adds, “have to be protected. They have to be isolated in a way, from all the other stuff that comes up during a work day. They can’t be called into meetings that are unrelated to this serious problem that they’re trying to address.”

Source: Quartz

Small 'b' blogging

I’ve been a blogger for around 13 years now. What the author of this post says about its value really resonates with me:

Small b blogging is learning to write and think with the network. Small b blogging is writing content designed for small deliberate audiences and showing it to them. Small b blogging is deliberately chasing interesting ideas over pageviews and scale. An attempt at genuine connection vs the gloss and polish and mass market of most “content marketing”.
He talks about the 'topology' of blogging changing over the years:
Crucially, these entry points to the network were very big and very accessible. What do I mean by that? Well - in those early days they were very big in the sense that if you got your content on the Digg homepage a lot of people would see it (relative to the total size of the network at the time). And they were very accessible in the sense that it wasn’t that hard to get your content there! I recall having a bunch of Digg homepage hits and Hacker News homepage hits.
I once had 15,000 people read a post of mine within a 24 hour period via a link from Hacker News. Yet the number of people who did something measurable (got in touch, subscribed to my newsletter, etc. ) was effectively zero.
Every community now has a fragmented number of communities, homepages, entry points, tinyletters, influencers and networks. They overlap in weird and wonderful ways - and it means that it’s harder than ever to feel like you got a “homepage” success on these networks. To create a moment that has the whole audience looking at the same thing at the same time.
We shouldn't write for page views and fame, but instead to create value. Just this week I've had people cite back to me posts I wrote years ago. It's a great thing.
So I challenge you to think clearly about the many disparate networks you’re part of and think about the ideas you might want to offer those networks that you don’t want to get lost in the feed. Ideas you might want to return to. Think about how writing with and for the network might enable you to start blogging. Forget the big B blogging model. Forget Medium’s promise of page views and claps. Forget the guest post on Inc, Forbes and Entrepreneur. Forget Fast Company. Forget fast content.
Source: Tom Critchlow

Small 'b' blogging

I’ve been a blogger for around 13 years now. What the author of this post says about its value really resonates with me:

Small b blogging is learning to write and think with the network. Small b blogging is writing content designed for small deliberate audiences and showing it to them. Small b blogging is deliberately chasing interesting ideas over pageviews and scale. An attempt at genuine connection vs the gloss and polish and mass market of most “content marketing”.
He talks about the 'topology' of blogging changing over the years:
Crucially, these entry points to the network were very big and very accessible. What do I mean by that? Well - in those early days they were very big in the sense that if you got your content on the Digg homepage a lot of people would see it (relative to the total size of the network at the time). And they were very accessible in the sense that it wasn’t that hard to get your content there! I recall having a bunch of Digg homepage hits and Hacker News homepage hits.
I once had 15,000 people read a post of mine within a 24 hour period via a link from Hacker News. Yet the number of people who did something measurable (got in touch, subscribed to my newsletter, etc. ) was effectively zero.
Every community now has a fragmented number of communities, homepages, entry points, tinyletters, influencers and networks. They overlap in weird and wonderful ways - and it means that it’s harder than ever to feel like you got a “homepage” success on these networks. To create a moment that has the whole audience looking at the same thing at the same time.
We shouldn't write for page views and fame, but instead to create value. Just this week I've had people cite back to me posts I wrote years ago. It's a great thing.
So I challenge you to think clearly about the many disparate networks you’re part of and think about the ideas you might want to offer those networks that you don’t want to get lost in the feed. Ideas you might want to return to. Think about how writing with and for the network might enable you to start blogging. Forget the big B blogging model. Forget Medium’s promise of page views and claps. Forget the guest post on Inc, Forbes and Entrepreneur. Forget Fast Company. Forget fast content.
Source: Tom Critchlow

What we can learn from Seneca about dying well

As I’ve shared before, next to my bed at home I have a memento mori, an object to remind me before I go to sleep and when I get up that one day I will die. It kind of puts things in perspective.

“Study death always,” Seneca counseled his friend Lucilius, and he took his own advice. From what is likely his earliest work, the Consolation to Marcia (written around AD 40), to the magnum opus of his last years (63–65), the Moral Epistles, Seneca returned again and again to this theme. It crops up in the midst of unrelated discussions, as though never far from his mind; a ringing endorsement of rational suicide, for example, intrudes without warning into advice about keeping one’s temper, in On Anger. Examined together, Seneca’s thoughts organize themselves around a few key themes: the universality of death; its importance as life’s final and most defining rite of passage; its part in purely natural cycles and processes; and its ability to liberate us, by freeing souls from bodies or, in the case of suicide, to give us an escape from pain, from the degradation of enslavement, or from cruel kings and tyrants who might otherwise destroy our moral integrity.
Seneca was forced to take his own life by his own pupil, the more-than-a-little-crazy Roman Emperor, Nero. However, his whole life had been a preparation for such an eventuality.
Seneca, like many leading Romans of his day, found that larger moral framework in Stoicism, a Greek school of thought that had been imported to Rome in the preceding century and had begun to flourish there. The Stoics taught their followers to seek an inner kingdom, the kingdom of the mind, where adherence to virtue and contemplation of nature could bring happiness even to an abused slave, an impoverished exile, or a prisoner on the rack. Wealth and position were regarded by the Stoics as adiaphora, “indifferents,” conducing neither to happiness nor to its opposite. Freedom and health were desirable only in that they allowed one to keep one’s thoughts and ethical choices in harmony with Logos, the divine Reason that, in the Stoic view, ruled the cosmos and gave rise to all true happiness. If freedom were destroyed by a tyrant or health were forever compromised, such that the promptings of Reason could no longer be obeyed, then death might be preferable to life, and suicide, or self-euthanasia, might be justified.
Given that death is the last taboo in our society, it's an interesting way to live your life. Being ready at any time to die, having lived a life that you're satisfied with, seems like the right approach to me.
“Study death,” “rehearse for death,” “practice death”—this constant refrain in his writings did not, in Seneca’s eyes, spring from a morbid fixation but rather from a recognition of how much was at stake in navigating this essential, and final, rite of passage. As he wrote in On the Shortness of Life, “A whole lifetime is needed to learn how to live, and—perhaps you’ll find this more surprising—a whole lifetime is needed to learn how to die.”
Source: Lapham's Quarterly

Light and deep

“Think lightly of yourself and deeply of the world.”

(Miyamoto Musashi)

Anonymity vs accountability

As this article points out, organisational culture is a delicate balance between many things, including accountability and anonymity:

Though some assurance of anonymity is necessary in a few sensitive and exceptional scenarios, dependence on anonymous feedback channels within an organization may stunt the normalization of a culture that encourages diversity and community.
Anonymity can be helpful and positive:
For example, an anonymous suggestion program to garner ideas from members or employees in an organization may strengthen inclusivity and enhance the diversity of suggestions the organization receives. It would also make for a more meritocratic decision-making process, as anonymity would ensure that the quality of the articulated idea, rather than the rank and reputation of the articulator, is what's under evaluation. Allowing members to anonymously vote for anonymously-submitted ideas would help curb the influence of office politics in decisions affecting the organization's growth.
...but also problematic:
Reliance on anonymous speech for serious organizational decision-making may also contribute to complacency in an organizational culture that falls short of openness. Outlets for anonymous speech may be as similar to open as crowdsourcing is—or rather, is not. Like efforts to crowdsource creative ideas, anonymous suggestion programs may create an organizational environment in which diverse perspectives are only valued when an organization's leaders find it convenient to take advantage of members' ideas.
The author gives some advice to leaders under five sub-headings:
  1. Availability of additional communication mechanisms
  2. Failure of other communication avenues
  3. Consequences of anonymity
  4. Designing the anonymous communication channel
  5. Long-term considerations
There's some great advice in here, and I'll certainly be reflecting on it with the organisations of which I'm part.

Source: opensource.com

On your deathbed, you're not going to wish that you'd spent more time on Facebook

As many readers of my work will know, I don’t have a Facebook account. This article uses Facebook as a proxy for something that, whether you’ve got an account on the world’s largest social network or not, will be familiar:

An increasing number of us are coming to realize that our relationships with our phones are not exactly what a couples therapist would describe as “healthy.” According to data from Moment, a time-tracking app with nearly five million users, the average person spends four hours a day interacting with his or her phone.

The trick, like anything to which you're psychologically addicted, is to reframe what you're doing:

Many people equate spending less time on their phones with denying themselves pleasure — and who likes to do that? Instead, think of it this way: The time you spend on your phone is time you’re not spending doing other pleasurable things, like hanging out with a friend or pursuing a hobby. Instead of thinking of it as “spending less time on your phone,” think of it as “spending more time on your life.”

The thing I find hardest is to leave my phone in a different room, or not take it with me when I go out. There's always a reason for this (usually 'being contactable') but not having it constantly alongside you is probably a good idea:

Leave your phone at home while you go for a walk. Stare out of a window during your commute instead of checking your email. At first, you may be surprised by how powerfully you crave your phone. Pay attention to your craving. What does it feel like in your body? What’s happening in your mind? Keep observing it, and eventually, you may find that it fades away on its own.

There's a great re-adjustment happening with our attitude towards devices and the services we use on them. In a separate BBC News article, Amol Rajan outlines some reasons why Facebook usage may have actually peaked:
  1. A drop in users
  2. A drop in engagement
  3. Advertiser enmity
  4. Disinformation and fake news
  5. Former executives speak out
  6. Regulatory mood is hardening
  7. GDPR
  8. Antagonism with the news industry
Interesting times.

Source: The New York Times / BBC News

The Goldilocks Rule

In this article from 2016, James Clear investigates motivation:

Why do we stay motivated to reach some goals, but not others? Why do we say we want something, but give up on it after a few days? What is the difference between the areas where we naturally stay motivated and those where we give up?
The answer, which is obvious when we think about it, is that we need appropriate challenges in our lives:
Tasks that are significantly below your current abilities are boring. Tasks that are significantly beyond your current abilities are discouraging. But tasks that are right on the border of success and failure are incredibly motivating to our human brains. We want nothing more than to master a skill just beyond our current horizon.

We can call this phenomenon The Goldilocks Rule. The Goldilocks Rule states that humans experience peak motivation when working on tasks that are right on the edge of their current abilities. Not too hard. Not too easy. Just right.

But he doesn’t stop there. He goes on to talk about Mihaly Csikszentmihalyi’s notion of peak performance, or ‘flow’ states:

In order to reach this state of peak performance... you not only need to work on challenges at the right degree of difficulty, but also measure your immediate progress. As psychologist Jonathan Haidt explains, one of the keys to reaching a flow state is that “you get immediate feedback about how you are doing at each step.”
Video games are great at inducing flow states; traditional classroom-based learning experiences, not so much. The key is to create these experiences yourself by finding optimum challenge and immediate feedback.
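That 'optimum challenge plus immediate feedback' loop can be sketched in a few lines. This is my own toy illustration, not anything from Clear's article: nudge difficulty up after each success and down after each failure, so the task hovers on the border of success and failure.

```python
# Toy dynamic-difficulty sketch (illustrative only, not from the article):
# wins push difficulty up, losses pull it down, keeping the task on the
# border of success and failure -- the Goldilocks zone.

def adjust_difficulty(difficulty: float, won: bool, step: float = 0.05) -> float:
    """Return the next difficulty level, clamped to the range [0, 1]."""
    difficulty += step if won else -step
    return max(0.0, min(1.0, difficulty))

level = 0.5
for outcome in [True, True, False, True]:  # immediate feedback after each attempt
    level = adjust_difficulty(level, outcome)
print(round(level, 2))  # 0.6
```

Real games use far more sophisticated controllers, but the principle is the same: feedback arrives after every attempt, and the challenge tracks the player's ability.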

Source: Lifehacker

Showing off

“Showing off is the fool’s idea of glory.”

(Bruce Lee)

On the death of Google/Apache Wave (and the lessons we can learn from it)

This article is entitled ‘How not to replace email’ and details both the demise of Google Wave and its open source continuation, Apache Wave:

As of a month ago, the Apache Wave project is “retired”. Few people noticed; in the seven years that Wave was an Apache Incubator open source project, it never had an official release, and was stuck at version 0.4-rc10 for the last three years.
Yes, I know! There's been a couple of times over the last few years when I've thought that Wave would have been perfect for a project I was working on. But the open source version never seemed to be 'ready'.

The world wasn’t ready for it in 2010, but now would seem to be the perfect time for something like Wave:

2017 was a year of rapidly growing interest in federated communications tools such as Mastodon, which is an alternative to Twitter that doesn’t rely on a single central corporation. So this seems like a good time to revisit an early federated attempt to reinvent how we use the internet to communicate with each other.
As the author notes, the problem was the overblown hype around it, causing Google to pull it after just three months. He quoted a friend of his who at one time was an active user:
We’d start sending messages with lots of diagrams, sketches, and stuff cribbed from Google Images, and then be able to turn those sort of longer-than-IM-shorter-than-email messages into actual design documents gradually.

In fact, I’d argue that even having a system that’s a messaging system designed for “a paragraph or two” was on its own worthwhile: even Slack isn’t quite geared toward that, and contrariwise, email […] felt more heavyweight than that. Wave felt like it encouraged the right amount of information per message.

I feel this too, and it’s actually something we’ve been talking about for internal communications at Moodle. Telegram (which we use kind of like Slack) is good for short, sharp communication, but there’s a gulf between that and, say, an email conversation or threaded forum discussion.

Perhaps this is the sweet spot for the ‘social networking’ aspect of Project MoodleNet?

Wave’s failure didn’t have anything to do with the ideas that went into it.

Those ideas and goals are sound, and this failure even provided good evidence that there’s a real need for something kind of like Wave: fifty thousand people signed a petition to “Save Google Wave” after Google announced they were shutting Wave down. Like so many petitions, it didn’t help (obviously), but if a mediocre implementation got tens of thousands of passionate fans, what could a good implementation do?

Helpfully, the author outlines some projects he’s been part of, after stating (my emphasis):

I’d say the single most important lesson to take away here, for a technology project at least, is that interoperability is key.
  • Assume that no matter how amazing your new tech is, people are going to adopt it slowly.
  • Give your early adopters every chance you can to use your offering together with the existing tools that they will continue to need in order to work with people who haven’t caught up yet.
  • And if you’re building a communication tool, make it as simple as possible for others to build compatible tools, because they will expand the network of people your users can communicate with to populations you haven’t thought of and probably don’t understand.
It's a really useful article with many practical applications (well, for me at least...)

Source: Jamey Sharp

To lose old styles of reading is to lose a part of ourselves

Sometimes I think we’re living in the end times:

Out for dinner with another writer, I said, "I think I've forgotten how to read."

"Yes!" he replied, pointing his knife. "Everybody has."

"No, really," I said. "I mean I actually can't do it any more."

He nodded: "Nobody can read like they used to. But nobody wants to talk about it."

I wrote my doctoral thesis on digital literacies. There was a real sense in the 1990s that reading on screen was very different to reading on paper. We've kind of lost that sense of difference, and I think perhaps we need to regain it:

For most of modern life, printed matter was, as the media critic Neil Postman put it, "the model, the metaphor, and the measure of all discourse." The resonance of printed books – their lineal structure, the demands they make on our attention – touches every corner of the world we've inherited. But online life makes me into a different kind of reader – a cynical one. I scrounge, now, for the useful fact; I zero in on the shareable link. My attention – and thus my experience – fractures. Online reading is about clicks, and comments, and points. When I take that mindset and try to apply it to a beaten-up paperback, my mind bucks.

We don't really talk about 'hypertext' any more, as it's almost the default type of text that we read. As such, reading on paper doesn't really prepare us for it:

For a long time, I convinced myself that a childhood spent immersed in old-fashioned books would insulate me somehow from our new media climate – that I could keep on reading and writing in the old way because my mind was formed in pre-internet days. But the mind is plastic – and I have changed. I'm not the reader I was.

Me too. I train myself to read longer articles through mechanisms such as writing Thought Shrapnel posts and newsletters each week. But I don't read like I used to; I read for utility rather than for pleasure or just for the sake of it.

The suggestion that, in a few generations, our experience of media will be reinvented shouldn't surprise us. We should, instead, marvel at the fact we ever read books at all. Great researchers such as Maryanne Wolf and Alison Gopnik remind us that the human brain was never designed to read. Rather, elements of the visual cortex – which evolved for other purposes – were hijacked in order to pull off the trick. The deep reading that a novel demands doesn't come easy and it was never "natural." Our default state is, if anything, one of distractedness. The gaze shifts, the attention flits; we scour the environment for clues. (Otherwise, that predator in the shadows might eat us.) How primed are we for distraction? One famous study found humans would rather give themselves electric shocks than sit alone with their thoughts for 10 minutes. We disobey those instincts every time we get lost in a book.

It's funny. We've such a connection with books, but for most of human history we've done without them:

Literacy has only been common (outside the elite) since the 19th century. And it's hardly been crystallized since then. Our habits of reading could easily become antiquated. The writer Clay Shirky even suggests that we've lately been "emptily praising" Tolstoy and Proust. Those old, solitary experiences with literature were "just a side-effect of living in an environment of impoverished access." In our online world, we can move on. And our brains – only temporarily hijacked by books – will now be hijacked by whatever comes next.

There are several theses in all of this around fake news, the role of reading in a democracy, and how information spreads. For now, I continue to be amazed at the power of the web on the fabric of societies.

Source: The Globe and Mail

Issue #292: Is there a cure for Tasmania? 🇦🇺

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Does the world need interactive emails?

I’m on the fence on this as, on the one hand, email is an absolute bedrock of the internet, a common federated standard that we can rely upon independent of technological factionalism. On the other hand, so long as it’s built into a standard others can adopt, it could be pretty cool.

The author of this article really doesn’t like Google’s idea of extending AMP (Accelerated Mobile Pages) to the inbox:

See, email belongs to a special class. Nobody really likes it, but it’s the way nobody really likes sidewalks, or electrical outlets, or forks. It’s not that there’s something wrong with them. It’s that they’re mature, useful items that do exactly what they need to do. They’ve transcended the world of likes and dislikes.
Fair enough, but as a total convert to Google's 'Inbox' app both on the web and on mobile, I don't think we can stop innovation in this area:
Emails are static because messages are meant to be static. The entire concept of communication via the internet is based around the telegraphic model of exchanging one-way packets with static payloads, the way the entire concept of a fork is based around piercing a piece of food and allowing friction to hold it in place during transit.
Are messages 'meant to be static'? I'm not so sure. Books were 'meant to' be paper-based until ebooks came along, and now there's all kinds of things we can do with ebooks that we can't do with their dead-tree equivalents.
Why do this? Are we running out of tabs? Were people complaining that clicking “yes” on an RSVP email took them to the invitation site? Were they asking to have a video chat window open inside the email with the link? No. No one cares. No one is being inconvenienced by this aspect of email (inbox overload is a different problem), and no one will gain anything by changing it.
Although it's an entertaining read, if 'why do this?' is the only argument the author, Devin Coldewey, has got against an attempted innovation in this space, then my answer would be why not? Although Coldewey points to the shutdown of Google Reader as an example of Google 'forcing' everyone to move to algorithmic news feeds, I'm not sure things are, and were, as simple as that.

It sounds a little simplistic to say so, but people either like and value something and therefore use it, or they don’t. We who like and uphold standards need to remember that, instead of thinking about what people and organisations should and shouldn’t do.

Source: TechCrunch

The Kano model

Using the example of the innovation of a customised home page from the early days of Flickr, this article helps break down how to delight users:

Years ago, we came across the work of Noriaki Kano, a Japanese expert in customer satisfaction and quality management. In studying his writing, we learned about a model he created in the 1980s, known as the Kano Model.
The article does a great job of explaining how you can implement great features but they don't particularly get users excited:
Capabilities that users expect will frustrate those users when they don’t work. However, when they work well, they don’t delight those users. A basic expectation, at best, can reach a neutral satisfaction, a point where it, in essence, becomes invisible to the user.

Try as it might, Google’s development team can only reduce the file-save problems to the point of it working 100% of the time. However, users will never say, “Google Docs is an awesome product because it saves my documents so well.” They just expect files to always be saved correctly.

So it’s a process of continual improvement, and marginal gains in some areas:

One of the predictions that the Kano Model makes is that once customers become accustomed to excitement generator features, those features are not as delightful. The features initially become part of the performance payoff and then eventually migrate to basic expectations.
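Kano's questionnaire makes these categories concrete: each feature is rated twice, once for how the user feels when it's present and once for when it's absent, and the answer pair determines the category. Here's a minimal sketch of the standard evaluation table using the article's category names; the answer labels and function itself are my own illustration:

```python
# Minimal sketch of the standard Kano questionnaire evaluation
# (illustrative; the answer labels and code are my own, the three main
# category names come from the article).

def classify(functional: str, dysfunctional: str) -> str:
    """Map answers about a feature's presence/absence to a Kano category.

    functional:    "How do you feel if the feature is present?"
    dysfunctional: "How do you feel if the feature is absent?"
    Valid answers: "like", "expect", "neutral", "tolerate", "dislike".
    """
    if functional == dysfunctional and functional in ("like", "dislike"):
        return "questionable"                # contradictory answer pair
    if functional == "like":
        return ("performance payoff" if dysfunctional == "dislike"
                else "excitement generator")
    if functional == "dislike" or dysfunctional == "like":
        return "reverse"                     # users prefer NOT having it
    if dysfunctional == "dislike":
        return "basic expectation"           # absence frustrates; presence is invisible
    return "indifferent"

# Google Docs' file saving, per the article: nobody praises it,
# everyone is frustrated when it fails.
print(classify("neutral", "dislike"))  # basic expectation
```

The migration the quote describes shows up here as the answers shifting over time: yesterday's delighter drew ("like", "neutral") responses; once users are accustomed to it, the same feature draws ("neutral", "dislike").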
Lots to think about here, particularly with Project MoodleNet.

Source: UIE

Is the gig economy the mass exploitation of millennials?

The answer is, “yes, probably”.

If the living wage is a pay scale calculated to be that of an appropriate amount of money to pay a worker so they can live, how is it possible, in a legal or moral sense, to pay someone less? We are witnessing a concerted effort to devalue labour, where the primary concern of business is profit, not the economic wellbeing of its employees.

The 'sharing economy' and 'gig economy' are nothing of the sort. They're a problematic and highly disingenuous way for employers to not care about the people who create value in their business.

The employer washes their hands of the worker. Their immediate utility is the sole concern. From a profit point of view, absolutely we can appreciate the logic. However, we forget that the worker also exists as a member of society, and when business is allowed to use and exploit people in this manner, we endanger societal cohesiveness.

The problem, of course, is late-stage capitalism:

The neoliberal project has encouraged us to adopt a hyper-individualistic approach to life and work. For all the speak of teamwork, in this economy the individual reigns supreme and it is destroying young workers. The present system has become unfeasible. The neoliberal project needs to be reeled back in. The free market needs a firm hand because the invisible one has lost its grip.

And the alternative? Co-operation.

Source: The Irish Times

Humans are not machines

Can we teach machines to be ‘fully human’? It’s a fascinating question, as it makes us think carefully about what it actually means to be a human being.

Humans aren’t just about inputs and outputs. There’s some things that we ‘know’ in different ways. Take music, for example.

In philosophy, it’s common to describe the mind as a kind of machine that operates on a set of representations, which serve as proxies for worldly states of affairs, and get recombined ‘offline’ in a manner that’s not dictated by what’s happening in the immediate environment. So if you can’t consciously represent the finer details of a guitar solo, the way is surely barred to having any grasp of its nuances. Claiming that you have a ‘merely visceral’ grasp of music really amounts to saying that you don’t understand it at all. Right?
There are activities we do and actions we perform that aren't the result of conscious thought. What status do we give them?
Getting swept up in a musical performance is just one among a whole host of familiar activities that seem less about computing information, and more about feeling our way as we go: selecting an outfit that’s chic without being fussy, avoiding collisions with other pedestrians on the pavement, or adding just a pinch of salt to the casserole. If we sometimes live in the world in a thoughtful and considered way, we go with the flow a lot, too.
What sets humans apart from animals is the ability to plan and to pay attention to abstract things and ideas:
Now, the world contains many things that we can’t perceive. I am unlikely to find a square root in my sock drawer, or to spot the categorical imperative lurking behind the couch. I can, however, perceive concrete things, and work out their approximate size, shape and colour just by paying attention to them. I can also perceive events occurring around me, and get a rough idea of their duration and how they relate to each other in time. I hear that the knock at the door came just before the cat leapt off the couch, and I have a sense of how long it took for the cat to sidle out of the room.
Time is one of the most abstract of the day-to-day things we deal with as humans:
Our conscious experience of time is philosophically puzzling. On the one hand, it’s intuitive to suppose that we perceive only what’s happening right now. But on the other, we seem to have immediate perceptual experiences of motion and change: I don’t need to infer from a series of ‘still’ impressions of your hand that it is waving, or work out a connection between isolated tones in order to hear a melody. These intuitions seem to contradict each other: how can I perceive motion and change if I am only really conscious of what’s occurring now? We face a choice: either we don’t really perceive motion and change, or the now of our perception encompasses more than the present instant – each of which seems problematic in its own way. Philosophers such as Franz Brentano and Edmund Husserl, as well as a host of more recent commentators, have debated how best to solve the dilemma.
So where does that leave us in terms of the differences between humans and machines?
Human attempts at making sense of the world often involve representing, calculating and deliberating. This isn’t the kind of thing that typically goes on in the 55 Bar, nor is it necessarily happening in the Lutheran church just down the block, or on a muddy football pitch in a remote Irish village. But gathering to make music, play games or engage in religious worship are far from being mindless activities. And making sense of the world is not necessarily just a matter of representing it.
To me, that last sentence is key: the world isn't just representations. It's deeper and more visceral than that.

Source: Aeon

Legislating against manipulated 'facts' is a slippery slope

In this day and age it’s hard to know who to trust. I was raised to trust in authority but was particularly struck when I did a deep-dive into Vinay Gupta’s blog about the state being special only because it holds a monopoly on (legal) violence.

As an historian, I’m all too aware of the times that the state (usually represented by a monarch) has served to repress its citizens/subjects. At least then it could pretend that it was protecting the majority of the people. As this article states:

Lies masquerading as news are as old as news itself. What is new today is not fake news but the purveyors of such news. In the past, only governments and powerful figures could manipulate public opinion. Today, it’s anyone with internet access. Just as elite institutions have lost their grip over the electorate, so their ability to act as gatekeepers to news, defining what is and is not true, has also been eroded.
So in the interaction between social networks such as Facebook, Twitter, and Instagram on the one hand, and various governments on the other hand, both are interested in power, not the people. Or even any notion of truth, it would seem:
This is why we should be wary of many of the solutions to fake news proposed by European politicians. Such solutions do little to challenge the culture of fragmented truths. They seek, rather, to restore more acceptable gatekeepers – for Facebook or governments to define what is and isn’t true. In Germany, a new law forces social media sites to take down posts spreading fake news or hate speech within 24 hours or face fines of up to €50m. The French president, Emmanuel Macron, has promised to ban fake news on the internet during election campaigns. Do we really want to rid ourselves of today’s fake news by returning to the days when the only fake news was official fake news?
We need to be vigilant. Those we trust today may not be trustworthy tomorrow.

Source: The Guardian

Obvious

“Things always become obvious after the fact.”

(Nassim Nicholas Taleb)

Why we forget most of what we read

I read a lot of stuff, and I remember random bits of it. I used to be reasonably disciplined about bookmarking stuff, but then realised I hardly ever went back through my bookmarks. So, instead, I try to use what I read, which is kind of the reason for Thought Shrapnel…

Surely some people can read a book or watch a movie once and retain the plot perfectly. But for many, the experience of consuming culture is like filling up a bathtub, soaking in it, and then watching the water run down the drain. It might leave a film in the tub, but the rest is gone.
Well, indeed. Nice metaphor.
In the internet age, recall memory—the ability to spontaneously call information up in your mind—has become less necessary. It’s still good for bar trivia, or remembering your to-do list, but largely, [Jared Horvath, a research fellow at the University of Melbourne] says, what’s called recognition memory is more important. “So long as you know where that information is at and how to access it, then you don’t really need to recall it,” he says.
Exactly. You need to know how to find that article you read that backs up the argument you're making. You don't need to remember all of the details. Search skills are really important.

One study showed that recall of episode details was much lower for those who binged a Netflix series than for those who spaced the episodes out. I guess that’s unsurprising.

People are binging on the written word, too. In 2009, the average American encountered 100,000 words a day, even if they didn’t “read” all of them. It’s hard to imagine that’s decreased in the nine years since. In “Binge-Reading Disorder,” an article for The Morning News, Nikkitha Bakshani analyzes the meaning of this statistic. “Reading is a nuanced word,” she writes, “but the most common kind of reading is likely reading as consumption: where we read, especially on the internet, merely to acquire information. Information that stands no chance of becoming knowledge unless it ‘sticks.’”
For anyone who knows about spaced learning, the conclusions are pretty obvious:
The lesson from his binge-watching study is that if you want to remember the things you watch and read, space them out. I used to get irritated in school when an English-class syllabus would have us read only three chapters a week, but there was a good reason for that. Memories get reinforced the more you recall them, Horvath says. If you read a book all in one stretch—on an airplane, say—you’re just holding the story in your working memory that whole time. “You’re never actually reaccessing it,” he says.
So apply what you learn and you're putting it to work. Hence this post!
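To make the spacing advice concrete, here's a toy schedule in Python where each successful review pushes the next one further out. The starting interval and doubling rule are illustrative assumptions, not a real spaced-repetition algorithm like SM-2:

```python
# Toy spacing schedule: re-encounter material at ever-growing intervals,
# rather than in one sitting. Numbers are illustrative only.

def review_schedule(first_interval_days=1, reviews=5, multiplier=2.0):
    """Days on which to re-read material, with gaps that grow each time."""
    day, interval, schedule = 0, first_interval_days, []
    for _ in range(reviews):
        day += interval
        schedule.append(day)
        interval *= multiplier
    return schedule
```

With the defaults this yields reviews on days 1, 3, 7, 15, and 31: each recall reinforces the memory, so the next gap can safely be longer.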

Source: The Atlantic (via e180)

Should you lower your expectations?

“Aim for the stars and maybe you’ll hit the treetops” was always the kind of advice I was given when I was younger. But extremely high expectations of oneself is not always a great thing. We have to learn that we’ve got limits. Some are physical, some are mental, and some are cultural:

The problem with placing too much emphasis on your expectations—especially when they are exceedingly high—is that if you don’t meet them, you’re liable to feel sad, perhaps even burned out. This isn’t to say that you shouldn’t strive for excellence, but there’s wisdom in not letting perfect be the enemy of good.
A (now famous) 2006 study found that people in Denmark are the happiest in the world. Researchers also found that they have remarkably low expectations. And then:
In a more recent study that included more than 18,000 participants and was published in 2014 in the Proceedings of the National Academy of Sciences, researchers from University College in London examined people’s happiness from moment to moment. They found that “momentary happiness in response to outcomes of a probabilistic reward task is not explained by current task earnings, but by the combined influence of the recent reward expectations and prediction errors arising from those expectations.” In other words: Happiness at any given moment equals reality minus expectations.
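The quoted model can be sketched in a few lines of Python. The weights, decay factor, and trial values below are illustrative assumptions, not the paper's fitted parameters:

```python
# Toy sketch of "happiness = reality minus expectations", loosely inspired
# by the PNAS study's model: momentary happiness is a decaying sum of
# expected values (EV) and reward prediction errors (RPE).

def momentary_happiness(expectations, outcomes, decay=0.5, w_ev=0.3, w_rpe=0.7):
    """Happiness after each trial, from decayed expectations and surprises."""
    history = []
    for t in range(len(outcomes)):
        ev_term = sum(decay ** (t - j) * expectations[j] for j in range(t + 1))
        rpe_term = sum(decay ** (t - j) * (outcomes[j] - expectations[j])
                       for j in range(t + 1))
        history.append(w_ev * ev_term + w_rpe * rpe_term)
    return history

# Same outcomes, different expectations.
outcomes = [1.0, 1.0, 1.0]
high = momentary_happiness(expectations=[2.0, 2.0, 2.0], outcomes=outcomes)
low = momentary_happiness(expectations=[0.5, 0.5, 0.5], outcomes=outcomes)
```

With these (assumed) weights, identical outcomes feel better when expectations were lower, which is exactly the "reality minus expectations" intuition.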
So if you've always got very high expectations that aren't being met, that's not a great situation to be in.
In the words of Jason Fried, founder and CEO of software company Basecamp and author of multiple books on workplace performance: “I used to set expectations in my head all day long. But constantly measuring reality against an imagined reality is taxing and tiring, [and] often wrings the joy out of experiencing something for what it is.”
Source: Outside

Trust

“The best way to find out if you can trust somebody is to trust them.”

(Ernest Hemingway)

Why do some things go viral?

I love internet memes and included a few in my TEDx talk a few years ago. The term ‘meme’ comes from Richard Dawkins, who coined it in the 1970s:

But trawling the Internet, I found a strange paradox: While memes were everywhere, serious meme theory was almost nowhere. Richard Dawkins, the famous evolutionary biologist who coined the word “meme” in his classic 1976 book, The Selfish Gene, seemed bent on disowning the Internet variety, calling it a “hijacking” of the original term. The peer-reviewed Journal of Memetics folded in 2005. “The term has moved away from its theoretical beginnings, and a lot of people don’t know or care about its theoretical use,” philosopher and meme theorist Daniel Dennett told me. What has happened to the idea of the meme, and what does that evolution reveal about its usefulness as a concept?
Memes aren't things that you necessarily want to find engaging or persuasive. They're kind of parasitic on the human mind:
Dawkins’ memes include everything from ideas, songs, and religious ideals to pottery fads. Like genes, memes mutate and evolve, competing for a limited resource—namely, our attention. Memes are, in Dawkins’ view, viruses of the mind—infectious. The successful ones grow exponentially, like a super flu. While memes are sometimes malignant (hellfire and faith, for atheist Dawkins), sometimes benign (catchy songs), and sometimes terrible for our genes (abstinence), memes do not have conscious motives. But still, he claims, memes parasitize us and drive us.
Dawkins doesn't like the use of the word 'meme' to refer to what we see on the internet:
According to Dawkins, what sets Internet memes apart is how they are created. “Instead of mutating by random chance before spreading by a form of Darwinian selection, Internet memes are altered deliberately by human creativity,” he explained in a recent video released by the advertising agency Saatchi & Saatchi. He seems to think that the fact that Internet memes are engineered to go viral, rather than evolving by way of natural selection, is a salient difference that distinguishes them from other memes—which is arguable, since what catches fire on the Internet can be as much a product of luck as any unexpected mutation.
So... why should we care?
While entertaining bored office workers seems harmless enough, there is something troubling about a multi-million dollar company using our minds as petri dishes in which to grow its ideas. I began to wonder if Dawkins was right—if the term meme is really being hijacked, rather than mindlessly evolving like bacteria. The idea of memes “forces you to recognize that we humans are not entirely the center of the universe where information is concerned—we’re vehicles and not necessarily in charge,” said James Gleick, author of The Information: A History, A Theory, A Flood, when I spoke to him on the phone. “It’s a humbling thing.”
It is indeed a humbling thing, but one that the study of Philosophy prepares you for, particularly Stoicism. Your mind is the one thing you can control, so be careful out there on the internet, reader.

Source: Nautilus

Humans responsible for the Black Death

I taught History for years, and when I was teaching the Black Death, I inculcated the received wisdom that it was rats that were responsible for the spread of disease.

But a team from the universities of Oslo and Ferrara now says the first, the Black Death, can be "largely ascribed to human fleas and body lice".

The study, in the Proceedings of the National Academy of Sciences, uses records of its pattern and scale.

There are three candidates for the spread of the Black Death: rats, air, and lice/fleas:

[Prof Nils Stenseth, from the University of Oslo] and his colleagues... simulated disease outbreaks in [nine European] cities, creating three models where the disease was spread by:
  • rats
  • airborne transmission
  • fleas and lice that live on humans and their clothes
In seven out of the nine cities studied, the "human parasite model" was a much better match for the pattern of the outbreak.

It mirrored how quickly it spread and how many people it affected.

“The conclusion was very clear,” said Prof Stenseth. “The lice model fits best."
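The study's approach (simulate each transmission route and see which best matches the historical records) can be sketched with a toy compartmental model. The transmission rates below are made-up numbers for illustration, nothing from the paper:

```python
# Toy model-comparison sketch: simulate simple epidemic curves under
# different (hypothetical) transmission rates, then ask which candidate
# best matches an "observed" outbreak.

def sir_curve(beta, gamma=0.1, days=120, n=10_000, i0=1):
    """Euler-integrated SIR model; returns daily new infections."""
    s, i = n - i0, i0
    new_cases = []
    for _ in range(days):
        infections = beta * s * i / n
        recoveries = gamma * i
        s -= infections
        i += infections - recoveries
        new_cases.append(infections)
    return new_cases

def fit_error(model, observed):
    """Sum of squared differences between two outbreak curves."""
    return sum((m - o) ** 2 for m, o in zip(model, observed))

# Pretend the "observed" outbreak grew at beta = 0.25 (a made-up
# human-parasite-like speed). The closest candidate model wins.
observed = sir_curve(0.25)
candidates = {"rats": 0.15, "airborne": 0.5, "human parasites": 0.25}
best = min(candidates,
           key=lambda name: fit_error(sir_curve(candidates[name]), observed))
```

The real study fitted far richer models to mortality records from nine cities; the point here is just the model-selection logic of comparing simulated curves against the data.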

Apologies to all those I taught the incorrect cause! I hope it hasn’t affected you too much in later life…

Source: BBC News

The world's most nutritious foods

The older I get, the more important (and the more immediately apparent) the health benefits of eating and exercising well become.

This article reports on scientists studying 1,000 different foods for their health benefits:

Scientists studied more than 1,000 foods, assigning each a nutritional score. The higher the score, the more likely each food would meet, but not exceed your daily nutritional needs, when eaten in combination with others.
The top ones?
  1. Almonds
  2. Cherimoya
  3. Ocean perch
  4. Flatfish
  5. Chia seeds
  6. Pumpkin seeds
  7. Swiss chard
  8. Pork fat
  9. Beet greens
  10. Snapper
Ever since reading of the value of almonds to non-meat eaters in The 4-Hour Body, I've taken a big bag of them on every trip. I also have some in a jar on my desk at home. As for the others on the list, some (pork fat!) are out of the question, and some (cherimoya) I've never come across.

Time for some more experimentation…

Source: BBC Future

Audio Adversarial speech-to-text

I don’t usually go in for detailed technical papers on stuff that’s not directly relevant to what I’m working on, but I made an exception for this. Here’s the abstract:

We construct targeted audio adversarial examples on automatic speech recognition. Given any audio waveform, we can produce another that is over 99.9% similar, but transcribes as any phrase we choose (at a rate of up to 50 characters per second). We apply our white-box iterative optimization-based attack to Mozilla’s implementation DeepSpeech end-to-end, and show it has a 100% success rate. The feasibility of this attack introduces a new domain to study adversarial examples.
In other words, the researchers managed to fool a neural network devoted to speech recognition into transcribing a phrase different to that which was uttered.

So how does it work?

By starting with an arbitrary waveform instead of speech (such as music), we can embed speech into audio that should not be recognized as speech; and by choosing silence as the target, we can hide audio from a speech-to-text system
The authors state that merely changing words so that something different occurs is a standard adversarial attack. But a targeted adversarial attack is different:
Not only are we able to construct adversarial examples converting a person saying one phrase to that of them saying a different phrase, we are also able to begin with arbitrary non-speech audio sample and make that recognize as any target phrase.
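Stripped of the actual speech-recognition machinery, the core loop of a targeted iterative attack looks something like this toy Python sketch. The "recogniser" here is just a per-label dot product, a purely illustrative stand-in for a real ASR model:

```python
# Toy targeted attack: iteratively nudge an input until a (deliberately
# trivial) recogniser outputs the phrase we choose. Illustrative only.

def recognise(x, templates):
    """Toy recogniser: return the label whose template best matches x."""
    scores = {label: sum(a * b for a, b in zip(t, x))
              for label, t in templates.items()}
    return max(scores, key=scores.get)

def targeted_attack(x, target, templates, lr=0.05, steps=500):
    """Gradient-ascent on the target label's score until the output flips."""
    x = list(x)
    t = templates[target]
    for _ in range(steps):
        if recognise(x, templates) == target:
            break
        # For a dot-product score, the gradient w.r.t. x is the template.
        x = [xi + lr * ti for xi, ti in zip(x, t)]
    return x

templates = {"hello": [1.0, 0.0, 0.0], "open sesame": [0.0, 1.0, 0.0]}
clean = [1.0, 0.1, 0.0]                      # recognised as "hello"
adv = targeted_attack(clean, "open sesame", templates)
```

A real attack, as in the paper, runs the same optimisation against a differentiable speech model, with a loss that also penalises the size of the perturbation so the audio still sounds unchanged to a human listener.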
This kind of stuff is possible due to open source projects, in particular Mozilla Common Voice. Great stuff.  

Source: Arxiv

Sounds and smells can help reinforce learning while you sleep

Apparently, the idea of learning while you sleep is actually bollocks, at least the way we have come to believe it works:

It wasn’t until the 1950s that researchers discovered the touted effects of hypnopaedia were actually not due to sleep at all. Instead these contraptions were actually awakening people. The debunkers could tell by using a relatively established technique called electroencephalography (EEG), which records the brain’s electrical signals through electrodes placed on the scalp. Using EEG on their participants, researchers could tell that the sleep-learners were actually awake (something we still do in research today), and this all but ended research into sleep as a cognitive tool. 50 years later, we now know it is possible to alter memory during sleep, just in a different way than previously expected.
However, and fascinatingly, sounds (not words) and smells can reinforce learning:
In 2007, the neuroscientist Björn Rasch at Lübeck University and colleagues reported that smells, which were associated with previously learned material, could be used to cue the sleeping brain. The study authors had taught participants the locations of objects on a grid, just like in the game Concentration, and exposed them to the odour of roses as they did so. Next, participants slept in the lab, and the experimenters waited until the deepest stage of sleep (slow-wave sleep) to once again expose them to the odour. Then when they were awake, the participants were significantly better at remembering where the objects were located. This worked only if they had been exposed to the rose odour during learning, and had smelled it during slow-wave sleep. If they were exposed to the odour only while awake or during REM sleep, the cue didn’t work.
Pretty awesome. There are some things still to research:
Outstanding questions that we have yet to address include: does this work for foreign-language learning (ie, grammar learning), or just learning foreign vocabulary? Could it be used to help maintain memory performance in an ageing population? Does reactivating some memories mean that others are wiped away even more quickly?
Worth trying!

Source: Aeon

Every easy thing is hard again

Although he isn’t aware of it, it was Frank Chimero who came up with the name Thought Shrapnel in a throwaway comment he made on his blog a while back. I immediately registered the domain name.

In this article, a write-up of a talk he’s been giving recently, Chimero talks about getting back into web design after a few years away founding a company.

This past summer, I gave a lecture at a web conference and afterward got into a fascinating conversation with a young digital design student. It was fun to compare where we were in our careers. I had fifteen years of experience designing for web clients, she had one year, and yet somehow, we were in the same situation: we enjoyed the work, but were utterly confused and overwhelmed by the rapidly increasing complexity of it all. What the hell happened? (That’s a rhetorical question, of course.)
Look at the image at the top of this post, one that Chimero uses in his talk. He explains:
There are similar examples of the cycle in other parts of how websites get designed and made. Nothing stays settled, so of course a person with one year of experience and one with fifteen years of experience can both be confused. Things are so often only understood by those who are well-positioned in the middle of the current wave of thought. If you’re before the sweet spot in the wave, your inexperience means you know nothing. If you are after, you will know lots of things that aren’t applicable to that particular way of doing things. I don’t bring this up to imply that the young are dumb or that the inexperienced are inept—of course they’re not. But remember: if you stick around in the industry long enough, you’ll get to feel all three situations.
The current way of working, he suggests, may be powerful, but it's overly complex for most of his work.
It was easy to back away from most of this new stuff when I realized I have alternate ways of managing complexity. Instead of changing my tools or workflow, I change my design. It’s like designing a house so it’s easy to build, instead of setting up cranes typically used for skyscrapers.
Chimero makes an important point about the 'legibility' of web projects, a word I've also been using recently about my own work. I want to make it as understandable as possible:
Illegibility comes from complexity without clarity. I believe that the legibility of the source is one of the most important properties of the web. It’s the main thing that keeps the door open to independent, unmediated contributions to the network. If you can write markup, you don’t need Medium or Twitter or Instagram (though they’re nice to have). And the best way to help someone write markup is to make sure they can read markup.
He includes a great video showing a real life race between a tortoise and a hare. He points out that the tortoise wins because the hare becomes distracted:

www.youtube.com/watch

He finishes with some powerful words:

As someone who has decades of experience on the web, I hate to compare myself to the tortoise, but hey, if it fits, it fits. Let’s be more like that tortoise: diligent, direct, and purposeful. The web needs pockets of slowness and thoughtfulness as its reach and power continues to increase. What we depend upon must be properly built and intelligently formed. We need to create space for complexity’s important sibling: nuance. Spaces without nuance tend to gravitate towards stupidity. And as an American, I can tell you, there are no limits to the amount of damage that can be inflicted by that dangerous cocktail of fast-moving-stupid.
Source: Frank Chimero

Issue #291: Necessary koalafications 🐨

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Why good parents have naughty children

This made me smile, then it made me think. Our children are offspring of a current teacher and a former teacher. What difference does our structure and rules make to their happiness?

This article from the ongoing Book of Life compares and contrasts two families. The first is what would generally be regarded as a ‘good’ family, where the children are well-behaved and interactions pleasant. However:

In Family One the so-called good child has inside them a whole range of emotions that they keep out of sight not because they want to but because they don’t feel they have the option to be tolerated as they really are. They feel they can’t let their parents see if they are angry or fed up or bored because it seems as if the parents have no inner resources to cope with their reality; they must repress their bodily, coarser, more volatile selves. Any criticism of a grown up is (they imagine) so wounding and devastating that it can’t be uttered.
The second family is the opposite, but:
In Family Two the so-called bad child knows that things are robust. They feel they can tell their mother she’s a useless idiot because they know in their hearts that she loves them and that they love her and that a bout of irritated rudeness won’t destroy that. They know their father won’t fall apart or take revenge for being mocked. The environment is warm and strong enough to absorb the child’s aggression, anger, dirtiness or disappointment.
As a parent, I'm torn: on the one hand, I want my children to be a bit rebellious; on the other hand, it's just really inconvenient when they are...
We should learn to see naughty children, a few chaotic scenes and occasional raised voices as belonging to health rather than delinquency – and conversely learn to fear small people who cause no trouble whatsoever. And, if we have occasional moments of happiness and well-being, we should feel especially grateful that there was almost certainly someone out there in the distant past who opted to look through the eyes of love at some deeply unreasonable and patently unpleasant behaviour from us.
Source: The Book of Life

Lost

“If you’re not lost, you’re not much of an explorer.” (John Perry Barlow)

Telegram cryptocurrency

I come across so many interesting links every day that I can only post a handful of them. Right now, and only a couple of months after starting this approach to Thought Shrapnel, I’ve got around 50 draft posts! This was one of them, from early January.

Telegram is great. I’ve been using it for the past couple of years with my wife, for the past year with my son and parents, and for the past three months or so with Moodle. It’s an extremely useful platform, as it’s so quick to send messages. Reliable, too, which is something my wife and I found Signal sometimes struggled with.

The brothers behind Telegram made their billions from creating VKontakte (usually shortened to ‘VK’ and known as the ‘Russian Facebook’). They’ve announced that Telegram will raise millions of dollars through an ‘ICO’ or Initial Coin Offering. This uses similar terminology to an Initial Public Offering, or IPO, which is when a company becomes publicly listed on a stock exchange. An ICO, on the other hand, is actually more like equity crowdfunding using cryptocurrency:

Encrypted messaging startup Telegram plans to launch its own blockchain platform and native cryptocurrency, powering payments on its chat app and beyond. According to multiple sources which have spoken to TechCrunch, the “Telegram Open Network” (TON) will be a new, ‘third generation’ blockchain with superior capabilities, after Bitcoin and, later, Ethereum paved the way.

It could lead to some quite exciting features:
With cryptocurrency powered payments inside Telegram, users could bypass remittance fees when sending funds across international borders, move sums of money privately thanks to the app’s encryption, deliver micropayments that would incur too high of credit card fees, and more. Telegram is already the de facto communication channel for the global cryptocurrency community, making a natural home to its own coin and Blockchain.
Whereas the major social networks kowtow to governmental demands around censorship, that doesn't seem to be the game plan for Telegram:
Moving to a decentralized blockchain platform could kill two birds with one stone for Telegram. As well as creating a full-blown cryptocurrency economy inside the app, it would also insulate it against the attacks and accusations of nation-states such as Iran, where it now accounts for 40% of Iran’s internet traffic but was temporarily blocked amongst nationwide protests against the government.
I don't pretend to understand the white paper they've published, but:
The claim is that it will be capable of a vastly superior number of transactions, around 1 million per second. In other words, similar to the ambitions of the Polkadot project out of Berlin — but with an installed base of 180 million people. This makes it an ‘interchain’ with so-called ‘dynamic sharding’.
Exciting times. As I was explaining to someone recently, Telegram are taking a very interesting route into user adoption. They couldn't go with the standard 'social network' approach as Facebook, Instagram, and Twitter mean that market is effectively saturated. Instead, they started with a messaging app, and are building out from there.

Source: TechCrunch

Rock piles and cathedrals

“A rock pile ceases to be a rock pile the moment a single man contemplates it, bearing within him the image of a cathedral.”

(Antoine de Saint-Exupéry)

Platform censorship and the threat to democracy

TorrentFreak reports that Science Hub (commonly referred to as ‘Sci-Hub’) has had its account with Cloudflare terminated. Sci-Hub is sometimes known as ‘the Piratebay of Science’ as, in the words of Wikipedia, it “bypasses publisher paywalls by allowing access through educational institution proxies”:

Cloudflare’s actions are significant because the company previously protested a similar order. When the RIAA used the permanent injunction in the MP3Skull case to compel Cloudflare to disconnect the site, the CDN provider refused.

The RIAA argued that Cloudflare was operating “in active concert or participation” with the pirates. The CDN provider objected, but the court eventually ordered Cloudflare to take action, although it did not rule on the “active concert or participation” part.

In the Sci-Hub case “active concert or participation” is also a requirement for the injunction to apply. While it specifically mentions ISPs and search engines, ACS Director Glenn Ruskin previously stressed that companies won’t be targeted for simply linking users to Sci-Hub.

Cloudflare is a Content Delivery Network (CDN), and I use their service on my sites, to improve web performance and security. They are the subject of some controversy at the moment, as the Electronic Frontier Foundation note:

From Cloudflare’s headline-making takedown of the Daily Stormer last autumn to YouTube’s summer restrictions on LGBTQ content, there's been a surge in “voluntary” platform censorship. Companies—under pressure from lawmakers, shareholders, and the public alike—have ramped up restrictions on speech, adding new rules, adjusting their still-hidden algorithms and hiring more staff to moderate content. They have banned ads from certain sources and removed “offensive” but legal content.
It's a big deal when intermediaries that large websites rely on for speed and security succumb to political pressure.
Given this history, we’re worried about how platforms are responding to new pressures. Not because there’s a slippery slope from judicious moderation to active censorship — but because we are already far down that slope. Regulation of our expression, thought, and association has already been ceded to unaccountable executives and enforced by minimally-trained, overworked staff, and hidden algorithms. Doubling down on this approach will not make it better. And yet, no amount of evidence has convinced the powers that be at major platforms like Facebook—or in governments around the world. Instead many, especially in policy circles, continue to push for companies to—magically and at scale—perfectly differentiate between speech that should be protected and speech that should be erased.
We live in contentious times, which are setting the course for a digitally mediated future. For every positive development (such as GDPR), there's stuff like this...

Sources: TorrentFreak / EFF

Decentralisation is the only way to wean people off capitalist social media

Everyone wants ‘decentralisation’ these days, whether it’s the way we make payments, or… well, pretty much anything that can be put on a blockchain.

But what does that actually mean in practice? What, as William James would say, is the ‘cash value’ of decentralisation? This article explores some of that:

Decentralization is a pretty vague buzzword. Vitalik considered its meaning a year ago. In my estimation, it can mean a couple of things:
  1. Abstract principle when analyzing general power structures of any kind: "Political decentralization" means spreading political power among differing entities. "Market decentralization" refers to outcomes being produced without being coordinated by a central authority. It's a philosophical idea that can be interpreted broadly in a lot of different contexts.
  2. Bitcoin, mostly. Lots of credit for the buzzword's current popularity traces back to cryptocurrencies and blockchains, and I think the term "decentralization" without context is rightfully claimed by the yescoiners and defer to Vitalik's interpretation for its meaning. I call this "financial decentralization" in contexts where my definition is dominant.
  3. A second, specific implementation of (1) that I want to talk about.
The author goes on to discuss a specific problem around social networking that decentralisation can solve:
Fundamentally, the problem with the web ecosystem is that consumer choice is limited. Facebook, Twitter, Google, and other tech giants “own” a large part of the social graph that both powers the core digital connection goodness and sustains the momentum that they will keep owning it, due to something called Metcalfe’s law. If you want to connect to people on the internet, you have to play by their rules.

So what can we do?

A "web decentralized" system looks like thus. You start with bare-bones replicas of social networking, publishing, microblogging, and chatting. You build a small social graph of your friends. This time, the data structures powering these applications live on your computer and are in a format you can easily grok and extend (Sorry, normies, it will be engineers-only for the next year or two).

[…]

The solution is technological standardization. Individuals, mostly engineers, need to expend a lot more effort contributing to the protocols and processes that drive inter-application communication. Your core Facebook identity – your username, your connections, your chat history – should be a universally standardized protocol with a Democracy-scale process for updating and extending it. Crucially, that process needs to be directed outside the direct control of tech companies, who are capitalistically bound to monopolize and direct control back to their domains.

It’s worth quoting the last paragraph:

Ultimately, decentralization is about shaping the balance of power in digital domains. I for one would not like to wait around while the Tech overlords and Crusty regulators decide what happens with our digital lives. There's no reason for us to keep listening to either of them. A handful of dedicated engineers, designers, and organizers could implement the alternative today. And that's what web decentralization is all about.
Source: Clutch of the Dead Hand

Europe is being taken over by crayfish that can clone themselves

I was a teenager when Dolly the sheep was cloned. It made me wonder why evolution seemed to favour species producing offspring from two parents. Why don’t creatures just clone themselves?

Well, it turns out that a new species of crayfish is doing exactly that:

Before about 25 years ago, the species simply did not exist. A single drastic mutation in a single crayfish produced the marbled crayfish in an instant.

The mutation made it possible for the creature to clone itself, and now it has spread across much of Europe and gained a toehold on other continents. In Madagascar, where it arrived about 2007, it now numbers in the millions and threatens native crayfish.

It looks like the mutation may have occurred in a German aquarium, and owners just haven't known what to do with them:

For nearly two decades, marbled crayfish have been multiplying like Tribbles on the legendary “Star Trek” episode. “People would start out with a single animal, and a year later they would have a couple hundred,” said Dr. Lyko.

Many owners apparently drove to nearby lakes and dumped their marmorkrebs. And it turned out that the marbled crayfish didn’t need to be pampered to thrive. Marmorkrebs established growing populations in the wild, sometimes walking hundreds of yards to reach new lakes and streams. Feral populations started turning up in the Czech Republic, Hungary, Croatia and Ukraine in Europe, and later in Japan and Madagascar.

They're not likely to completely take over the earth, however. Having the same DNA, they have the same susceptibility to disease and changing environmental conditions:

There are a lot of clear advantages to being a clone. Marbled crayfish produce nothing but fertile offspring, allowing their populations to explode. “Asexuality is a fantastic short-term strategy,” said Dr. Tucker.

In the long term, however, there are benefits to sex. Sexually reproducing animals may be better at fighting off diseases, for example.

If a pathogen evolves a way to attack one clone, its strategy will succeed on every clone. Sexually reproducing species mix their genes together into new combinations, increasing their odds of developing a defense.

I'm not eating meat at the moment, but I am eating (shell)fish. So I'm imagining a sustainable source of tasty, tasty crayfish...

Source: The New York Times

Alzheimer's is a kind of 'type 3' diabetes

My Great Aunt, who we were close to, developed Alzheimer’s Disease towards the end of her life. This article claims that scientific evidence points to a link between the condition and diabetes:

A longitudinal study, published Thursday in the journal Diabetologia, followed 5,189 people over 10 years and found that people with high blood sugar had a faster rate of cognitive decline than those with normal blood sugar—whether or not their blood-sugar level technically made them diabetic. In other words, the higher the blood sugar, the faster the cognitive decline.
And the reason?
Schilling posits this happens because of the insulin-degrading enzyme, a product of insulin that breaks down both insulin and amyloid proteins in the brain—the same proteins that clump up and lead to Alzheimer’s disease. People who don’t have enough insulin, like those whose bodies’ ability to produce insulin has been tapped out by diabetes, aren’t going to make enough of this enzyme to break up those brain clumps. Meanwhile, in people who use insulin to treat their diabetes and end up with a surplus of insulin, most of this enzyme gets used up breaking that insulin down, leaving not enough enzyme to address those amyloid brain clumps.
Really interesting, and another reason to avoid sugar and heavily-processed foods.

Source: The Atlantic

Puertopia

Dudes make millions (or billions) of dollars via cryptocurrency. Hurricane hits Puerto Rico. They decide to build a new state.

They call what they are building Puertopia. But then someone told them, apparently in all seriousness, that it translates to “eternal boy playground” in Latin. So they are changing the name: They will call it Sol.
Oops.

Puerto Rico offers an unparalleled tax incentive: no federal personal income taxes, no capital gains tax and favorable business taxes — all without having to renounce your American citizenship. For now, the local government seems receptive toward the crypto utopians; the governor will speak at their blockchain summit conference, called Puerto Crypto, in March.

Of course it does. But look at what they've got planned:

Some are open to the new wave as a welcome infusion of investment and ideas.

“We’re open for crypto business,” said Erika Medina-Vecchini, the chief business development officer for the Department of Economic Development and Commerce, in an interview at her office. She said her office was starting an ad campaign aimed at the new crypto expat boom, with the tagline “Paradise Performs.”

Others worry about the island’s being used for an experiment and talk about “crypto colonialism.” At a house party in San Juan, Richard Lopez, 32, who runs a pizza restaurant, Estella, in the town of Arecibo, said: “I think it’s great. Lure them in with taxes, and they’ll spend money.”

Andria Satz, 33, who grew up in Old San Juan and works for the Conservation Trust of Puerto Rico, disagreed.

“We’re the tax playground for the rich,” she said. “We’re the test case for anyone who wants to experiment. Outsiders get tax exemptions, and locals can’t get permits.”

Interesting times.

Source: The New York Times

Worth the risks?

“Decide whether or not the goal is worth the risks involved. If it is, stop worrying.” (Amelia Earhart)

Creating media, not just consuming it

My wife and I are fans of Common Sense Media, and often use their film and TV reviews when deciding what to watch as a family. In their newsletter, they had a link to an article about strategies to help kids create media, rather than just consume it:

Kids actually love to express themselves, but sometimes they feel like they don't have much of a voice. Encouraging your kid to be more of a maker might just be a matter of pointing to someone or something they admire and giving them the technology to make their vision come alive. No matter your kids' ages and interests, there's a method and medium to encourage creativity.
They link to apps for younger and older children, and break things down by what kind of kids you've got. It's a cliché, but nevertheless true, that every child is different. My son, for example, has just given up playing the piano, but loves making electronic music:
Most kids love music right out of the womb, so transferring that love into creation isn't hard when they're little. Banging on pots and pans is a good place to start -- but they can take that experience with them using apps that let them play around with sound. Little kids can start to learn about instruments and how sounds fit together into music. Whether they're budding musicians or just appreciators, older kids can use tools to compose, stay motivated, and practice regularly. And when tweens and teens want to start laying down some tracks, they can record, edit, and share their stuff.
The post is chock-full of links, so there's something for everyone. I'm delighted to be able to pair it with a recent image Amy shared in our Slack channel which lists the rules she has for her teenage daughter around screentime. I'd like to frame it for our house!

Source: Common Sense Media

Image: Amy Burvall (you can hire her)

GDPR, blockchain, and privacy

I’m taking an online course about the impending General Data Protection Regulation (GDPR), which I’ve been writing about on my personal blog. An article in WIRED talks about the potential it will have, along with technologies such as blockchain.

People have talked about everyone having ‘private data accounts’ which they then choose to hook up to service providers for years. GDPR might just force that to happen:

A new generation of apps and websites will arise that use private-data accounts instead of conventional user accounts. Internet applications in 2018 will attach themselves to these, gaining access to a smart data account rich with privately held contextual information such as stress levels (combining sleep patterns, for example, with how busy a user's calendar is) or motivation to exercise (comparing historical exercise patterns to infer about the day ahead). All of this will be possible without the burden on the app supplier of undue sensitive data liability or any violation of consumers' personal rights.

As the article points out, when we know what's going to happen with our data, we're probably more likely to share it. For example, I'm much more likely to invest in voice-assisted technologies once GDPR hits in May:

Paradoxically, the internet will become more private at a moment when we individuals begin to exchange more data. We will then wield a collective economic power that could make 2018 the year we rebalance the digital economy.

This will have a huge effect on our everyday information landscape:

The more we share data on our terms, the more the internet will evolve to emulate the physical domain where private spaces, commercial spaces and community spaces can exist separately, but side by side. Indeed, private-data accounts may be the first step towards the internet as a civil society, paving the way for a governing system where digital citizens, in the form of their private micro-server data account, do not merely have to depend on legislation to champion their private rights, but also have the economic power to enforce them as well.

I have to say, the more I discover about the provisions of GDPR, the more excited and optimistic I am about the future.

Source: WIRED

Living in a dictatorship

The historian and social commentator in me found this fascinating. This article quotes Twitter user G. Willow Wilson (who claims to have lived in a dictatorship) as saying:

It’s a mistake to think a dictatorship feels intrinsically different on a day-to-day basis than a democracy does. I’ve lived in one dictatorship and visited several others—there are still movies and work and school and shopping and memes and holidays.

The difference is the steady disappearance of dissent from the public sphere. Anti-regime bloggers disappear. Dissident political parties are declared “illegal.” Certain books vanish from the libraries.

If you click through to the actual Twitter thread, Wilson continues:

The genius of a true, functioning dictatorship is the way it carefully titrates justice. Once in awhile it will allow a sound judicial decision or critical op-ed to bubble up. Rational discourse is never entirely absent. There is plausible deniability.
Of course this isn't a dictatorship. It's only a temporary state of affairs. And we're doing it for your benefit:
So if you're waiting for the grand moment when the scales tip and we are no longer a functioning democracy, you needn't bother. It'll be much more subtle than that. It'll be more of the president ignoring laws passed by congress. It'll be more demonizing of the press.
That's what concerns me when people say that they don't care about privacy and security. Technology can help with resistance to autocracy.

Source: Kottke.org

Culture is the behaviour you reward and punish

This is an interesting read on team and organisational culture in practice. Interesting choice of image, too (I’ve used a different one).

Compensation helps very little when it comes to aligning culture, because it’s private. Public rewards are much more influential. Who gets promoted, or hangs out socially with the founders? Who gets the plum project, or a shout-out at the company all-hands? Who gets marginalized on low-value projects, or worse, fired? What earns or derails the job offer when interview panels debrief? These are powerful signals to our teammates, and they’re imprinting on every bit of it.

In my mind, organisational culture is a lot like family dynamics, especially the parenting part. After all, kids follow what you do rather than what you say.

When role models are consistent, everyone gets the message, and they align towards that expectation even if it wasn’t a significant part of their values system before joining the company. That’s how culture gets reproduced, and how we assimilate new co-workers who don’t already possess our values.

People stop taking values seriously when the public rewards (and consequences) don’t match up. We can say that our culture requires treating each other with respect, but all too often, the openly rude high performer is privately disciplined, but keeps getting more and better projects. It doesn’t matter if you docked his bonus or yelled at him in private. When your team sees unkind people get ahead, they understand that the real culture is not one of kindness.

Culture eats strategy for breakfast, yet most organisations I've worked with and for don't spend nearly enough time on it.

Culture is powerful. It makes teams highly functional and gives meaning to our work. It’s essential for organizational scale because culture enables people to make good decisions without a lot of oversight. But ironically, culture is particularly vulnerable when you are growing quickly. If newcomers get guidance from teammates and leaders who aren’t assimilated themselves, your company norms don’t have a chance to reproduce. If rewards like stretch projects and promotions are handed out through battlefield triage, there’s no consistency to your value system.

When you strip away everything else, all you've got are your principles and values. I think most organisations (and people) would do well to remember that.

Source: Jocelyn Goldfein (via Offscreen Magazine)

Are cows less valuable than wolves?

When debating with people, one of my go-to approaches is getting them to think through the logical consequences of their actions. Effectively, I’m a serial invoker of Kant’s categorical imperative: what would happen if everyone acted like this?

This article gets people to think about a world full of vegans:

Vegetarianism and veganism are becoming more popular. Alternative sources of protein, including lab-grown meat, are becoming available. This trend away from farmed meat-eating looks set to continue. From an environmental perspective and a welfare perspective, that’s a good thing. But how far should we go? Would it be good if the last cow died?
Well, let's think it through...
There is a distinct difference between cattle on the one hand, and pandas and wolves on the other. Modern cattle owe their existence to selective breeding by human beings: they are very different animals from the wild oxen from which they are descended. We might think that this difference is relevant to their moral value. We might think, that is, along the following lines: we have a duty to preserve the natural world as far as we can. Wolves and pandas belong to that natural world; they occupy their place in it due to the mechanisms of evolution. So we have a duty to preserve them (not an absolute duty of course: rather one duty among many others – to our children, to each other, and so on – each of which makes different and sometimes conflicting demands on us).
Right, so that's quite complex.
If we think, as I do, that being cultural is itself an adaptation, a natural feature of human beings, then we shouldn’t think that the ways in which we are cultural exempt us from nature, or that the products of our culture are themselves unnatural.
In other words, we should put to one side our status of mammals at the top of the food chain when thinking about this stuff. Fascinating.

Source: Aeon

How we get influence backwards

Austin Kleon reflects on the following quotation from Jean-Michel Basquiat:

You’ve got to realize that influence is not influence. It’s simply someone’s idea going through my new mind.
In other words, the person who's doing the influencing doesn't know they're doing the influencing. We say that an artist or writer was influenced by someone, but that's the wrong way around:
When we say, “Basquiat was influenced by Van Gogh,” that isn’t really correct, because it implies that Van Gogh is doing something to Basquiat, when actually the opposite is true.
Kleon continues to quote K.K. Ruthven:
Our understanding of literary ‘influence’ is obstructed by the grammar of our language, which puts things back to front in obliging us to speak in passive terms of the one who is the active partner in the relationship: to say that Keats influenced Wilde is not only to credit Keats with an activity of which he was innocent, but also to misrepresent Wilde by suggesting he merely submitted to something he obviously went out of his way to acquire. In matters of influence, it is the receptor who takes the initiative, not the emitter. When we say that Keats had a strong influence on Wilde, what we really mean is that Wilde was an assiduous reader of Keats, an inquisitive reader in the service of an acquisitive writer.
I like things that make me think differently about things I take for granted, especially ones that have been encoded into language.

Source: Austin Kleon

Issue #290: Unscathed

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

The punk rock internet

This kind of article is useful in that it shows to a mainstream audience the benefits of a redecentralised web and resistance to Big Tech.

Balkan and Kalbag form one small part of a fragmented rebellion whose prime movers tend to be located a long way from Silicon Valley. These people often talk in withering terms about Big Tech titans such as Mark Zuckerberg, and pay glowing tribute to Edward Snowden. Their politics vary, but they all have a deep dislike of large concentrations of power and a belief in the kind of egalitarian, pluralistic ideas they say the internet initially embodied.

What they are doing could be seen as the online world’s equivalent of punk rock: a scattered revolt against an industry that many now think has grown greedy, intrusive and arrogant – as well as governments whose surveillance programmes have fuelled the same anxieties. As concerns grow about an online realm dominated by a few huge corporations, everyone involved shares one common goal: a comprehensively decentralised internet.

However, these kinds of articles are very personality-driven, and the little asides made by the article’s author paint those featured as a bit crazy and the whole idea as a bit far-fetched.

For example, here’s the section on a project which is doing some pretty advanced tech while avoiding venture capitalist money:

In the Scottish coastal town of Ayr, where a company called MaidSafe works out of a silver-grey office on an industrial estate tucked behind a branch of Topps Tiles, another version of this dream seems more advanced. MaidSafe’s first HQ, in nearby Troon, was an ocean-going boat. The company moved to an office above a bridal shop, and then to an unheated boatshed, where the staff sometimes spent the working day wearing woolly hats. It has been in its new home for three months: 10 people work here, with three in a newly opened office in Chennai, India, and others working remotely in Australia, Slovakia, Spain and China.
I get the need to bring technology alive for the reader, but what difference does it make that their office is behind Topps Tiles? So what if the staff sometimes wear woolly hats? It just makes the whole thing out to be farcical. Which of course, it's not.

Source: The Guardian

The origin of the term 'open source'

I didn’t use to think that who came up with the name of a thing particularly mattered, nor how it came about.

I’ve changed my mind, however, as the history of these things also potentially tells you about their future. In this article, Christine Peterson outlines how she came up with the term ‘open source’:

The introduction of the term "open source software" was a deliberate effort to make this field of endeavor more understandable to newcomers and to business, which was viewed as necessary to its spread to a broader community of users. The problem with the main earlier label, "free software," was not its political connotations, but that—to newcomers—its seeming focus on price is distracting. A term was needed that focuses on the key issue of source code and that does not immediately confuse those new to the concept. The first term that came along at the right time and fulfilled these requirements was rapidly adopted: open source.
Tellingly, as it was the 1990s, Peterson let a man introduce the term so that it would gain traction:
Toward the end of the meeting, the question of terminology was brought up explicitly, probably by Todd or Eric. Maddog mentioned "freely distributable" as an earlier term, and "cooperatively developed" as a newer term. Eric listed "free software," "open source," and "sourceware" as the main options. Todd advocated the "open source" model, and Eric endorsed this. I didn't say much, letting Todd and Eric pull the (loose, informal) consensus together around the open source name. It was clear that to most of those at the meeting, the name change was not the most important thing discussed there; a relatively minor issue. Only about 10% of my notes from this meeting are on the terminology question.
From this point, Tim O'Reilly had to agree and popularise it, but:
Coming up with a phrase is a small contribution, but I admit to being grateful to those who remember to credit me with it. Every time I hear it, which is very often now, it gives me a little happy twinge.

Source: opensource.com

Optimism

“Optimism is the faith that leads to achievement. Nothing can be done without hope and confidence.” (Helen Keller)

The Project Design Tetrahedron

I had reason this week to revisit Dorian Taylor’s interview on Uses This. I fell into a rabbit hole of his work, and came across a lengthy post he wrote back in 2014.

I've given considerable thought throughout my career to the problem of resource management as it pertains to the development of software, and I believe my conclusions are generalizable to all forms of work which is dominated by the gathering, concentration, and representation of information, rather than the transportation and arrangement of physical stuff. This includes creative work like writing a novel, painting a picture, or crafting a brand or marketing message. Work like this is heavy on design or problem-solving, with negligible physical implementation overhead. Stuff-based work, by contrast, has copious examples in mature industries like construction, manufacturing, resource extraction and logistics.

As you can see in the image above, he argues that the traditional engineering approach of having things either:

  • Fast and Good
  • Cheap and Fast
  • Good and Cheap

...is wrong, given a lean and iterative design process. You can actually make things that are immediately useful (i.e. 'Good'), relatively Cheap, and do so Fast. The thing you sacrifice in those situations, and hence the 'tetrahedron' is Predictable Results.

If you can reduce a process to an algorithm, then you can make extremely accurate predictions about the performance of that algorithm. Considerably more difficult, however, is defining an algorithm for defining algorithms. Sure, every real-world process has well-defined parts, and those can indeed be subjected to this kind of treatment. There is still, however, that unknown factor that makes problem-solving processes unpredictable.

In other words, we live in an unpredictable world, but we can still do awesome stuff. Nassim Nicholas Taleb would be proud.

Source: Dorian Taylor

Promising everything

“Whoever promises everything, promises nothing, and promises are a trap for fools.” (Baltasar Gracián)

Designing social systems

This article is too long and written in a way that could be more direct, but it still makes some good points. Perhaps the best bit is the comparison of iOS lockscreen (left) with a redesigned one (right).

Most platforms encourage us to act against our values: less humbly, less honestly, less thoughtfully, and so on. Using these platforms while sticking to our values would mean constantly fighting their design. Unless we’re prepared for that fight, we’ll regret our choices.

When we join conversations online, we're not always part of a group; sometimes we're part of a network. It seems to me that most of the points the author makes pertain to social networks like Facebook, as opposed to those like Twitter and Mastodon.

He does, however, make a good point about a shift towards people feeling they have to act in a particular way:

Groups are held together by a particular kind of conversation, which I’ll call wisdom. It’s a kind of conversation that people are starved for right now—even amidst nonstop communication, amidst a torrent of articles, videos, and posts.

When this type of conversation is missing, people feel that no one understands or cares about what’s important to them. People feel their values are unheeded and unrecognized.

[T]his situation is easy to exploit, and the media and fake news ecosystems have done just that. As a result, conversations become ideological and polarized, and elections are manipulated.

Tribal politics in social networks are caused by people not having strong offline affinity groups, so they seek their 'tribe' online.

If social platforms can make it easier to share our personal values (like small town living) directly, and to acknowledge one another and rally around them, we won’t need to turn them into ideologies or articles. This would do more to heal politics and media than any “fake news” initiative. To do this, designers need to know what this kind of conversation sounds like, how to encourage it, and how to avoid drowning it out.

Ultimately, the author has no answer and (wisely) turns to the community for help. I like the way he points to exercises we can do and groups we can form. I'm not sure it'll scale, though...

Source: Human Systems

Irony doesn't scale

Paul Ford is venerated in Silicon Valley and, based on what I’ve read of his, for good reason. He describes himself as a ‘reluctant capitalist’.

In this post from last year, he discusses building a positive organisational culture:

A lot of businesses, especially agencies, are sick systems. They make a cult of their “visionary” founders. And they keep going but never seem to thrive — they always need just one more lucky break before things improve. Payments are late. Projects are late. The phone rings all weekend. That’s not what we wanted to build. We wanted to thrive.
He sets out characteristics of a 'well system':
  • Hire people who like to work hard and who have something to prove.
  • Encourage people to own and manage large blocks of their own time, and give people time to think and make thinking part of the job—not extra.
  • Let people rest. Encourage them to go home at sensible times. If they work late give them time off to make up for it.
  • Aim for consistency. Set emotional boundaries and expectations, be clear about rewards, and protect people where possible from crises so they can plan their time.
  • Make their success their own and credit them for it.
  • Don’t promise happiness. Promise fair pay and good work.

Ford makes the important point that leaders need to be seen to do and say the right things:

I’m not a robot by any means. But I’ve learned to watch what I say. If there’s one rule that applies everywhere, it’s that Irony Doesn’t Scale. Jokes and asides can be taken out of context; witty complaints can be read as lack of enthusiasm. People are watching closely for clues to their future. Your dry little bon mot can be read as “He’s joking but maybe we are doomed!” You are always just one hilarious joke away from a sick system.

It's a useful post, particularly for anyone in a leadership position.

Source: Track Changes (via Offscreen newsletter /48)

Web Trends Map 2018 (or 'why we can't have nice things')

My son, who’s now 11 years old, used to have iA’s Web Trends Map v4 on his wall. It was produced in 2009, when he was two:

iA Web Trends Map 4 (2009)

I used it to explain the web to him, as the subway map was a metaphor he could grasp. I’d wondered why iA hadn’t produced more in subsequent years.

Well, the answer is clear in a recent post:

Don’t get too excited. We don’t have it. We tried. We really tried. Many times. The most important ingredient for a Web Trend Map is missing: The Web. Time to bring some of it back.

Basically, the web has been taken over by capitalist interests:

The Web has lost its spirit. The Web is no longer a distributed Web. It is, ironically, a couple of big tubes that belong to a handful of companies. Mainly Google (search), Facebook (social) and Amazon (e-commerce). There is an impressive Chinese line and there are some local players in Russia, Japan, here and there. Overall it has become monotonous and dull. What can we do?

It's difficult. Although I support the aims, objectives, and ideals of the IndieWeb, I can't help but think it's looking backwards instead of forwards. I'm hoping that newer approaches such as federated social networks, distributed ledgers and databases, and regulation such as GDPR have some impact.

Source: iA

So, what do you do?

Say what you want about teaching: it makes it extremely easy to answer the above question.

But that question might not be the best way to build rapport with someone else. In fact, it may be best to avoid talking about work entirely.

It's better, apparently, to find shared ground about common goals and interests:

Research findings from the world of network science and psychology suggest that we tend to prefer and seek out relationships where there is more than one context for connecting with the other person. Sociologists refer to these as multiplex ties, connections where there is an overlap of roles or affiliations from a different social context. If a colleague at work sits on the same nonprofit board as you, or sits next to you in spin class at the local gym, then you two share a multiplex tie. We may prefer relationships with multiplex ties because research suggests that relationships built on multiplex ties tend to be richer, more trusting, and longer lasting.

The author of this article suggests you can ask the following questions instead:
  • What excites you right now?
  • What are you looking forward to?
  • What’s the best thing that happened to you this year?
  • Where did you grow up?
  • What do you do for fun?
  • Who is your favorite superhero?
  • Is there a charitable cause you support?
  • What’s the most important thing I should know about you?

Unfortunately, unlike the ubiquitous “So, what do you do?”, none of these are useful as conversation-starters. And then, after I’ve corrected for Britishness, there’s exactly zero I’d use in the course of serious adult conversation…

Source: Harvard Business Review

The military implications of fitness tech

I was talking about this last night with a guy who used to be in the army. It’s a BFD.

In March 2017, a member of the Royal Navy ran around HMNB Clyde, the high-security military base that's home to Trident, the UK's nuclear deterrent. His pace wasn't exceptional, but it wasn't leisurely either.

His run, like millions of others around the world, was recorded through the Strava app. A heatmap of more than one billion activities – comprising 13 billion GPS data points – has been criticised for showing the locations of supposedly secretive military bases. It was thought that, at the very least, the data was totally anonymised. It isn't.

Oops.

The fitness app – which can record a person's GPS location and also host data from devices such as Fitbits and Garmin watches – allows users to create segments and leaderboards. These are areas where a run, swim, or bike ride can be timed and compared. Segments can be seen on the Strava website, rather than on the heatmap.

Computer scientist and developer Steve Loughran detailed how to create a GPS segment and upload it to Strava as an activity. Once uploaded, a segment shows the top times of people running in an area. Which is how it's possible to see the running routes of people inside the high-security walls of HMNB Clyde.

Of course, this is an operational security issue. The military personnel shouldn't really be using Strava while they're living/working on bases.

"The underlying problem is that the devices we wear, carry and drive are now continually reporting information about where and how they are used 'somewhere'," Loughran said. "In comparison to the datasets which the largest web companies have, Strava's is a small set of files, voluntarily uploaded by active users."

Source: WIRED

Audrey Watters on technology addiction

Audrey Watters answers the question whether we’re ‘addicted’ to technology:

I am hesitant to make any clinical diagnosis about technology and addiction – I’m not a medical professional. But I’ll readily make some cultural observations, first and foremost, about how our notions of “addiction” have changed over time. “Addiction” is a medical concept but it’s also a cultural one, and it’s long been one tied up in condemning addicts for some sort of moral failure. That is to say, we have labeled certain behaviors as “addictive” when they’ve involved things society doesn’t condone. Watching TV. Using opium. Reading novels. And I think some of what we hear in discussions today about technology usage – particularly about usage among children and teens – is that we don’t like how people act with their phones. They’re on them all the time. They don’t make eye contact. They don’t talk at the dinner table. They eat while staring at their phones. They sleep with their phones. They’re constantly checking them.

The problem is that our devices are designed to be addictive, much like casinos. The apps on our phones are designed to increase certain metrics:

I think we’re starting to realize – or I hope we’re starting to realize – that those metrics might conflict with other values. Privacy, sure. But also etiquette. Autonomy. Personal agency. Free will.

Ultimately, she thinks, this isn't a question of addiction. It's much wider than that:

How are our minds – our sense of well-being, our knowledge of the world – being shaped and mis-shaped by technology? Is “addiction” really the right framework for this discussion? What steps are we going to take to resist the nudges of the tech industry – individually and socially and yes maybe even politically?

Good stuff.

Source: Audrey Watters

No cash, no freedom?

The ‘cashless’ society, eh?

Every time someone talks about getting rid of cash, they are talking about getting rid of your freedom. Every time they actually limit cash, they are limiting your freedom. It does not matter if the people doing it are wonderful Scandinavians or Hindu supremacist Indians, they are people who want to know and control what you do to an unprecedentedly fine-grained scale.

Yep, just because someone cool is doing it doesn't mean it won't have bad consequences. In the rush to add technology to things, we create future dystopias.

Cash isn’t completely anonymous. There’s a reason why old fashioned crooks with huge cash flows had to money-launder: Governments are actually pretty good at saying, “Where’d you get that from?” and getting an explanation. Still, it offers freedom, and the poorer you are, the more freedom it offers. It also is very hard to track specifically, i.e., who made what purchase.

Blockchains won’t be untaxable. The ones which truly are unbreakable will be made illegal; the ones that remain, well, it’s a ledger with every transaction on it, for goodness sakes.

It’s this bit that concerns me:

We are creating a society where even much of what you say, will be knowable and indeed, may eventually be tracked and stored permanently.

If you do not understand why this is not just bad, but terrible, I cannot explain it to you. You have some sort of mental impairment of imagination and ethics.

Source: Ian Welsh

Depression as an evolutionary advantage?

It’s been almost 15 years since I suffered from depression. Since that time, I’ve learned to look after myself mentally and physically to resist whatever natural tendency I have towards spiralling downwards.

I found this article fascinating.

Some psychologists... have argued that depression is not a dysfunction at all, but an evolved mechanism designed to achieve a particular set of benefits.

The dominant popular view seems to be that there's something wrong with your brain chemistry, so exercise, antidepressants and counselling can fix it.

Paul Andrews, an evolutionary psychologist now at McMaster University... noted that the physical and mental symptoms of depression appeared to form an organized system. There is anhedonia, the lack of pleasure or interest in most activities. There’s an increase in rumination, the obsessing over the source of one’s pain. There’s an increase in certain types of analytical ability. And there’s an uptick in REM sleep, a time when the brain consolidates memories.

However, for me, the fix was to get out of the terrible situation I was in, a teaching job in a very tough school.

If something is broken in your life, you need to bear down and mend it. In this view, the disordered and extreme thinking that accompanies depression, which can leave you feeling worthless and make you catastrophize your circumstances, is needed to punch through everyday positive illusions and focus you on your problems. In a study of 61 depressed subjects, 4 out of 5 reported at least one upside to their rumination, including self-insight, problem solving, and the prevention of future mistakes.

I suffer from migraines, which are bizarre episodes. They're difficult to explain to people as they're a whole-body response. Changing my lifestyle so I don't get migraines is a micro-version of the kind of lifestyle changes you need to make to stave off depression.

These theories do cast some of our traditional responses to depression in a new light, however. If depression is a strategic response that we are programmed to carry out, consciously or unconsciously, does it make sense to try to suppress its symptoms through, say, the use of antidepressants? [Edward] Hagen [an anthropologist at Washington State University] describes antidepressants as painkillers, arguing that it would be unethical for a doctor to treat a broken ankle with Percocet and no cast. You need to fix the underlying problem.
I can't imagine being on antidepressants for any more than a few weeks (as I was). They dull your mind, which allows you to cope with the world as it is, but don't (in my experience) allow you to lead a flourishing human life.

Even if depression evolved as a useful tool over the eons, that doesn’t make it useful today. We’ve evolved to crave sugar and fat, but that adaptation is mismatched with our modern environment of caloric abundance, leading to an epidemic of obesity. Depression could be a mismatched condition. Hagen concedes that for most of evolution, we lived with relatives and spent all day with people ready to intervene in our lives, so that episodes of depression might have led to quick solutions. Today, we’re isolated, and we move from city to city, engaging with people less invested in our reproductive fitness. So depressive signals may go unheeded and then compound, leading to consistent, severe dysfunction. A Finnish study found that as urbanization and modernization have increased over the last two centuries, so have suicide rates. That doesn’t mean depression is no longer functional (if indeed it ever was), just that in the modern world it may misfire more than we’d like.

Source: Nautilus

Product managers as knowledge centralisers

If you asked me what I do for a living, I’d probably respond that I work for Moodle, am co-founder of a co-op, and also do some consultancy. What I probably wouldn’t say, although it would be true, is that I’m a product manager.

I’m not particularly focused on ‘commercial success’ but the following section of this article certainly resonates:

When I think of what a great product manager’s qualities should be, I find myself considering where the presence of this role is felt the most. When successful, the outside world perceives commercial success but internally, over the course of building the product, a team would gain a sense of confidence, rooted in a better understanding of the problem being addressed, a higher level of focus and an overall higher level of aptitude. If I were to summarize what I feel a great product manager’s qualities are, it would be the constant dedication to centralizing knowledge for a team in all aspects of the role — the UX, the technology and the strategy.

We haven't got all of the resourcing in place for Project MoodleNet yet, so I'm spending my time making sure the project is set up for success: sorting out how we communicate, how we signal that things are blocked/finished/need checking, making sure the project will be GDPR-compliant, that the risk register is complete, and that we log decisions.

Product management has been popularized as a role that unified the business, technology and UX/Design demands of a software team. Many of the more established product managers have often noted that they “stumbled” into the role without knowing what their sandbox was and more often than not, they did not even hold the title itself.

Being a product manager is an interdisciplinary role, and I should imagine that most have had varied careers to date. I certainly have.

There is a lot of thinking done around what the ideal product manager should have the power to do and it often hinges around locking down a vision and seeing it through to its execution and data collection. However, this portrayal of a product manager as an island of synergy, knowledge and the perfect intersection of business, tech and design is not where the meaty value of the role lies.

[…]

A sense of discipline in the daily tasks such as sprint planning and retrospectives, collecting feedback from users, stand up meetings and such can be seen as something that is not just done for the purpose of order and structure, but as a way of reinforcing and democratizing the institutional knowledge between members of a team. The ability for a team to pivot, the ability to reach consensus, is a byproduct of common, centralized knowledge that is built up from daily actions and maintained and kept alive by the product manager. In the rush of a delivery and of creative chaos, this sense of structure and order has to be lovingly maintained by someone in order for a team to really internally benefit from the fruits of their labour over time.

It’s a great article, and well worth a read.

Source: We Seek

Using VR with kids

I’ve seen conflicting advice regarding using Virtual Reality (VR) with kids, so it’s good to see this from the LSE:

Children are becoming aware of virtual reality (VR) in increasing numbers: in autumn 2016, 40% of those aged 2-15 surveyed in the US had never heard of VR, and this number was halved less than one year later. While the technology is appealing and exciting to children, its potential health and safety issues remain questionable, as there is, to date, limited research into its long-term effects.

I have given my two children (six and nine at the time) experience of VR — albeit in limited bursts. The concern I have is about eyesight, mainly.

As a young technology there are still many unknowns about the long-term risks and effects of VR gaming, although Dubit found no negative effects from short-term play for children’s visual acuity, and little difference between pre- and post-VR play in stereoacuity (which relies on good eyesight for both eyes and good coordination between the two) and balance tests. Only 2 of the 15 children who used the fully immersive head-mounted display showed some stereoacuity after-effects, and none of those using the low-cost Google Cardboard headset showed any. Similarly, a few seemed to be at risk of negative after-effects to their balance after using VR, but most showed no problems.

There's some good advice in this post for VR games/experience designers, and for parents. I'll quote the latter:

While much of a child’s experience with VR may still be in museums, schools or other educational spaces under the guidance of trained adults, as the technology becomes more available in domestic settings, to ensure health and safety at home, parents and carers need to:

  • Allow children to preview the game on YouTube, if available.
  • Provide children with time to readjust to the real world after playing, and give them a break before engaging with activities like crossing roads, climbing stairs or riding bikes, to ensure that balance is restored.
  • Check on the child’s physical and emotional wellbeing after they play.

There's a surprising lack of regulation and guidance in this space, so it's good to see the LSE taking the initiative!

Source: Parenting for a Digital Future

Augmented and Virtual Reality on the web

There were a couple of exciting announcements last week about web technologies being used for Augmented Reality (AR) and Virtual Reality (VR). Using standard technologies that work across a range of devices is a game-changer.

First off, Google announced ‘Article’ which provides a straightforward way to add virtual objects to physical spaces.

Google AR

Mozilla, meanwhile, directed attention towards A-Frame, which they’ve been supporting for a while. This allows VR experiences to be created using web technologies, including networking users together in-world.

Mozilla VR

Although each has its uses, I think AR is going to be a much bigger deal than VR for most people, mainly because it adds to an experience we’re used to (i.e. the world around us) rather than replacing it.

Sources: Google blog / A-Frame

The horror of the Bett Show

I’ve been to the Bett Show (formerly known as BETT, which is how the author refers to it in this article) in many different guises. I’ve been as a classroom teacher, school senior leader, researcher in Higher Education, when I was working in different roles at Mozilla, as a consultant, and now in my role at Moodle.

I go because it’s free, and because it’s a good place to meet up with people I see rarely. While I’ve changed and grown up, the Bett Show is still much the same. As Junaid Mubeen, the author of this article, notes:  

The BETT show is emblematic of much that EdTech gets wrong. No show captures the hype of educational technology quite like the world’s largest education trade show. This week marked my fifth visit to BETT at London’s Excel arena. True to form, my two days at the show left me feeling overwhelmed with the number of products now available in the EdTech market, yet utterly underwhelmed with the educational value on offer.

It's laughable, it really is. I saw all sorts of tat while I was there. I heard that a decent sized stand can set you back around a million pounds.

One senses from these shows that exhibitors are floating from one fad to the next, desperately hoping to attach their technological innovations to education. In this sense, the EdTech world is hopelessly predictable; expect blockchain applications to emerge in not-too-distant future BETT shows.

But of course. I felt particularly sorry this year for educators I know who were effectively sales reps for the companies they've gone to work for. I spent about five hours there, wandering, talking, and catching up with people. I can only imagine the horror of being stuck there for four days straight.

I like the questions Mubeen comes up with. However, the edtech companies are playing a different game. While there’s some interest in pedagogical development, for most of them it’s just another vertical market.

In the meantime, there are four simple questions every self-professed education innovator should demand of themselves:

  • What is your pedagogy? At the very least, can you list your educational goals?
  • What does it mean for your solution to work and how will this be measured in a way that is meaningful and reliable?
  • How are your users supported to achieve their educational goals after the point of sale?
  • How do your solutions interact with other offerings in the marketplace?

Somewhat naïvely, the author says that he looks forward to the day when exhibitors are selected "not on their wallet size but on their ability to address these foundational questions". As there's a for-profit company behind Bett, I think he'd better not hold his breath.

Source: Junaid Mubeen

Issue #289: Loooooong week

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

More haste, less speed

In the last couple of years, there’s been a move to give names to security vulnerabilities that would be otherwise too arcane to discuss in the mainstream media. For example, back in 2014, Heartbleed, “a security bug in the OpenSSL cryptography library, which is a widely used implementation of the Transport Layer Security (TLS) protocol”, had not only a name but a logo.

The recent media storm around the so-called ‘Spectre’ and ‘Meltdown’ vulnerabilities shows how effective this approach is. It also helps that they sound a little like James Bond science fiction.

In this article, Zeynep Tufekci argues that the security vulnerabilities are built on our collective desire for speed:

We have built the digital world too rapidly. It was constructed layer upon layer, and many of the early layers were never meant to guard so many valuable things: our personal correspondence, our finances, the very infrastructure of our lives. Design shortcuts and other techniques for optimization — in particular, sacrificing security for speed or memory space — may have made sense when computers played a relatively small role in our lives. But those early layers are now emerging as enormous liabilities. The vulnerabilities announced last week have been around for decades, perhaps lurking unnoticed by anyone or perhaps long exploited.

Helpfully, she gives a layperson's explanation of what went wrong with these two security vulnerabilities:

Almost all modern microprocessors employ tricks to squeeze more performance out of a computer program. A common trick involves having the microprocessor predict what the program is about to do and start doing it before it has been asked to do it — say, fetching data from memory. In a way, modern microprocessors act like attentive butlers, pouring that second glass of wine before you knew you were going to ask for it.

But what if you weren’t going to ask for that wine? What if you were going to switch to port? No problem: The butler just dumps the mistaken glass and gets the port. Yes, some time has been wasted. But in the long run, as long as the overall amount of time gained by anticipating your needs exceeds the time lost, all is well.

Except all is not well. Imagine that you don’t want others to know about the details of the wine cellar. It turns out that by watching your butler’s movements, other people can infer a lot about the cellar. Information is revealed that would not have been had the butler patiently waited for each of your commands, rather than anticipating them. Almost all modern microprocessors make these butler movements, with their revealing traces, and hackers can take advantage.
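Tufekci's butler analogy can be made concrete with a toy simulation. The sketch below is entirely my own invention (not from her article, and nothing like a real exploit): the 'speculative' work is rolled back, but its side effect on the cache survives, and that leftover trace is what an attacker reads.

```python
# Toy model of the speculative-execution leak: the "butler" is rolled
# back, but the side effects of his movements (the cache) are not.

class ToyCPU:
    def __init__(self):
        # Addresses currently in the cache: the observable side channel.
        self.cache = set()

    def load(self, addr):
        """A load caches the address as a side effect of fetching it."""
        self.cache.add(addr)

    def speculative_read(self, secret):
        """Speculate on a secret-dependent load, then 'roll back'.

        The architectural result is discarded, but the cache line
        touched during speculation stays warm.
        """
        self.load(0x1000 + secret)  # address depends on the secret
        return None                 # rollback: no visible result

def recover_secret(cpu, candidates):
    """Probe each candidate address: a cached one would read 'fast'.

    Cache membership here stands in for the timing difference a real
    attacker measures.
    """
    for secret in candidates:
        if (0x1000 + secret) in cpu.cache:
            return secret
    return None

cpu = ToyCPU()
cpu.speculative_read(secret=42)  # victim speculates; result is discarded
print(recover_secret(cpu, range(256)))  # → 42
```

Real attacks have to measure access latency rather than inspect the cache directly, and must defeat plenty of complications this toy ignores, but the shape is the same: the speculation is undone, its footprint is not.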

Right now, she argues, systems have to employ more and more tricks to squeeze performance out of hardware because the software we use is riddled with surveillance and spyware.

But the truth is that our computers are already quite fast. When they are slow for the end-user, it is often because of “bloatware”: badly written programs or advertising scripts that wreak havoc as they try to track your activity online. If we were to fix that problem, we would gain speed (and avoid threatening and needless surveillance of our behavior).

As things stand, we suffer through hack after hack, security failure after security failure. If commercial airplanes fell out of the sky regularly, we wouldn’t just shrug. We would invest in understanding flight dynamics, hold companies accountable that did not use established safety procedures, and dissect and learn from new incidents that caught us by surprise.

And indeed, with airplanes, we did all that. There is no reason we cannot do the same for safety and security of our digital systems.

Over the past few weeks, since the vulnerabilities came to light, major vendors have been pushing out patches. For-profit companies have limited resources, of course, and proprietary, closed-source code. This means there'll be some devices that won't get the security updates at all, leaving end users in a tricky situation: their hardware is now almost worthless. So do they (a) keep on using it, crossing their fingers that nothing bad happens, or (b) bite the bullet and upgrade?

What I think the communities I’m part of could have done better at is shout loudly that there’s an option (c): open source software. No matter how old your hardware, the chances are that someone, somewhere, with the requisite skills will want to fix the vulnerabilities on that device.

Source: The New York Times

Ethical design in social networks

I’m thinking a lot about privacy and ethical design at the moment as part of my role leading Project MoodleNet. This article gives a short but useful overview of the Ethical Design Manifesto, along with some links for further reading:

There is often a disconnect between what digital designers originally intend with a product or feature, and how consumers use or interpret it.

Ethical user experience design – meaning, for example, designing technologies in ways that promote good online behaviour and intuit how they might be used – may help bridge that gap.

There are already people (like me) making choices about the technology and social networks they use based on ethics:

User experience design and research has so far mainly been applied to designing tech that is responsive to user needs and locations. For example, commercial and digital assistants that intuit what you will buy at a local store based on your previous purchases.

However, digital designers and tech companies are beginning to recognise that there is an ethical dimension to their work, and that they have some social responsibility for the well-being of their users.

Meeting this responsibility requires designers to anticipate the meanings people might create around a particular technology.

In addition to ethical design, there are other elements to take into consideration:

Contextually aware design is capable of understanding the different meanings that a particular technology may have, and adapting in a way that is socially and ethically responsible. For example, smart cars that prevent mobile phone use while driving.

Emotional design refers to technology that elicits appropriate emotional responses to create positive user experiences. It takes into account the connections people form with the objects they use, from pleasure and trust to fear and anxiety.

This includes the look and feel of a product, how easy it is to use and how we feel after we have used it.

Anticipatory design allows technology to predict the most useful interaction within a sea of options and make a decision for the user, thus “simplifying” the experience. Some companies may use anticipatory design in unethical ways that trick users into selecting an option that benefits the company.

Source: The Conversation

Reading the web on your own terms

Although it’s been less than a decade since the demise of the wonderful, simple, much-loved Google Reader, it seems like a different age entirely.

Subscribing to news feeds and blogs via RSS wasn’t as widely used as it could/should have been, but there was something magical about that period of time.
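Part of that magic is how simple the underlying format is: a blog's feed is just a small XML document listing recent posts. A minimal reader fits in a few lines of Python using only the standard library (the feed below is invented for illustration; a real reader would fetch it over HTTP from the blog's feed URL):

```python
import xml.etree.ElementTree as ET

# A minimal, invented RSS 2.0 feed for demonstration purposes.
FEED = """\
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>https://example.com/1</link></item>
    <item><title>Second post</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return (blog_title, [(post_title, url), ...]) from an RSS string."""
    channel = ET.fromstring(xml_text).find("channel")
    items = [(item.findtext("title"), item.findtext("link"))
             for item in channel.findall("item")]
    return channel.findtext("title"), items

title, posts = parse_feed(FEED)
print(title)        # → Example Blog
print(posts[0][0])  # → First post
```

No algorithm, no tracking scripts: just the writer's posts, in order, which is exactly the quietness the quote below describes.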

In this article, the author reflects on that era and suggests that we might want to give it another try:

Well, I believe that RSS was much more than just a fad. It made blogging possible for the first time because you could follow dozens of writers at the same time and attract a considerably large audience if you were the writer. There were no ads (except for the high-quality Daring Fireball kind), no one could slow down your feed with third party scripts, it had a good baseline of typographic standards and, most of all, it was quiet. There were no comments, no likes or retweets. Just the writer’s thoughts and you.

I was a happy user of Google Reader until they pulled the plug. It was a bit more interactive than other feed readers, somehow, in a way I can't quite recall. Everyone used it until they didn't.

The unhealthy bond between RSS and Google Reader is proof of how fragile the web truly is, and it reveals that those communities can disappear just as quickly as they bloom.

Since that time I've been an intermittent user of Feedly. Everyone else, it seems, succumbed to the algorithmic news feeds provided by Facebook, Twitter, and the like.

A friend of mine the other day said that “maybe Medium only exists because Google Reader died — Reader left a vacuum, and the social network filled it.” I’m not entirely sure I agree with that, but it sure seems likely. And if that’s the case then the death of Google Reader probably led to the emergence of email newsletters, too.

[…]

On a similar note, many believe that blogging is making a return. Folks now seem to recognize the value of having your own little plot of land on the web and, although it’s still pretty complex to make your own website and control all that content, it’s worth it in the long run. No one can run ads against your thing. No one can mess with the styles. No one can censor or sunset your writing.

Not only that but when you finish making your website you will have gained superpowers: you now have an independent voice, a URL, and a home on the open web.

I don’t think we can turn the clock back, but it does feel like there might be positive, future-focused ways of improving things through, for example, decentralisation.

Source: Robin Rendle

The NSA (and GCHQ) can find you by your 'voiceprint' even if you're speaking a foreign language on a burner phone

This is pretty incredible:

Americans most regularly encounter this technology, known as speaker recognition, or speaker identification, when they wake up Amazon’s Alexa or call their bank. But a decade before voice commands like “Hello Siri” and “OK Google” became common household phrases, the NSA was using speaker recognition to monitor terrorists, politicians, drug lords, spies, and even agency employees.

The technology works by analyzing the physical and behavioral features that make each person’s voice distinctive, such as the pitch, shape of the mouth, and length of the larynx. An algorithm then creates a dynamic computer model of the individual’s vocal characteristics. This is what’s popularly referred to as a “voiceprint.” The entire process — capturing a few spoken words, turning those words into a voiceprint, and comparing that representation to other “voiceprints” already stored in the database — can happen almost instantaneously. Although the NSA is known to rely on finger and face prints to identify targets, voiceprints, according to a 2008 agency document, are “where NSA reigns supreme.”

Hmmm….

The voice is a unique and readily accessible biometric: Unlike DNA, it can be collected passively and from a great distance, without a subject’s knowledge or consent. Accuracy varies considerably depending on how closely the conditions of the collected voice match those of previous recordings. But in controlled settings — with low background noise, a familiar acoustic environment, and good signal quality — the technology can use a few spoken sentences to precisely match individuals. And the more samples of a given voice that are fed into the computer’s model, the stronger and more “mature” that model becomes.

So yeah, let's put a microphone in every room of our house so that we can tell Alexa to turn off the lights. What could possibly go wrong?
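The matching step the article describes (turn a few spoken words into a vector, compare it against stored 'voiceprints', accept the closest match above a threshold) can be sketched with cosine similarity. This is a toy illustration only: real systems derive high-dimensional embeddings from acoustic models, and the vectors, names, and threshold below are all invented.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical enrolled 'voiceprints' (real ones are high-dimensional model outputs)
database = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

def identify(sample, threshold=0.85):
    # Compare the new sample against every enrolled voiceprint,
    # keep the best match, and reject it if it isn't close enough
    name, score = max(((n, cosine(sample, v)) for n, v in database.items()),
                      key=lambda t: t[1])
    return name if score >= threshold else None

print(identify([0.88, 0.12, 0.28]))  # prints alice
```

Feeding more samples of a voice into the model strengthens the stored representation, which is what the quoted document means by the model becoming more "mature".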

Source: The Intercept

Favourable winds

“If a man does not know to what port he is steering, no wind is favourable to him.”

(Seneca)

Listening to video game soundtracks can improve your productivity

I can attest to the power of this, particularly the Halo soundtrack:

As I write these words, a triumphant horn is erupting in my ear over the rhythmic bowing of violins. In fact, as you read, I would encourage you to listen along—just search “Battlefield One.” I bet you'll focus just a bit better with it playing in the background. After all, as a video game soundtrack it's designed to have exactly that effect.

This is, by far, the best Life Pro Tip I’ve ever gotten or given: Listen to music from video games when you need to focus. It’s a whole genre designed to simultaneously stimulate your senses and blend into the background of your brain, because that’s the point of the soundtrack. It has to engage you, the player, in a task without distracting from it. In fact, the best music would actually direct the listener to the task.

These days I prefer to listen to Brain.fm after I got a lifetime deal via AppSumo a year or so ago. I enjoy music as an art form, but I also appreciate it for the effect it can have on my brain.

Source: Popular Science


Technology to connect and communicate

People going to work in factories and offices is a relatively recent invention. For most of human history, people have worked from, or very near to, their home.

But working from home these days is qualitatively different, because we have the internet, as Sarah Jaffe points out in a recent newsletter:

Freelancing is a strange way to work, not because self-supervised labor in the home doesn't have a long history that well predates leaving your house to go to a workplace, but because it relies so much on communication with the outside. I'm waiting on emails from editors and so I am writing to you, my virtual water-cooler companions.

[…]

The internet, then, serves to make work less isolated. I have chats going a lot of the day, unless I’m in super drill-down writing mode, which is less of my job than many people probably expect. My friends have helped me figure out thorny issues in a piece I’m writing and helped me figure out what to write in an email to an editor who’s dropped off the face of the earth and advised me on how much money to ask for. It’s funny, there are so many stories about the way the internet is making us lonely and isolated, and it is sometimes my only human contact. My voice creaked when I answered the phone this morning because I hadn’t yet used it today.

The problem is that capitalism forces us into a situation where we’re competing with others rather than collaborating with them:

How do we use technology to connect and communicate rather than compete? How do we have conversations that further our understandings of things?

I don't actually think it's solely a technology problem, although every technology has inbuilt biases. It's also a problem to be solved at the societal 'operating system' level through, for example, co-owning the organisation for which you work.

Source: Sarah Jaffe

Are conferences a vestige of a bygone era?

I’m certainly attending fewer conferences than I used to, but I thought that was just the changing nature of my work and ways of making a living.

Marco Arment makes some important points in this post about how conferences are just kind of outdated as a concept:

  • Cost: With flights, lodging, and the ticket adding up to thousands of dollars per conference, most people are priced out. The vast majority of attendees’ money isn’t even going to the conference organizers or speakers — it’s going to venues, hotels, and airlines.
  • Size: There’s no good size for a conference. Small conferences exclude too many people; big conferences impede socialization and logistics.
  • Logistics: Planning and executing a conference takes such a toll on the organizers that few of them have ever lasted more than a few years.
  • Format: Preparing formal talks with slide decks is a massively inefficient use of the speakers’ time compared to other modern methods of communicating ideas, and sitting there listening to blocks of talks for long stretches while you’re trying to stay awake after lunch is a pretty inefficient way to hear ideas.

This has always been the case, of course. It's just that technology-mediated ways of connecting, both synchronously and asynchronously, have improved:

Podcasts are a vastly more time-efficient way for people to communicate ideas than writing conference talks, and people who prefer crafting their message as a produced piece or with multimedia can do the same thing (and more) on YouTube. Both are much easier and more versatile for people to consume than conference talks, and they can reach and benefit far more people.

Conferences are by their very nature exclusive and take up a lot of people's time. There's still space for them, but I think time is up for the low-quality, just-for-the-sake-of-it conference.

Source: Marco.org

A useful IndieWeb primer

I’ve followed the IndieWeb movement since its inception, but it’s always seemed a bit niche. I love (and use) the POSSE model, for example, but expecting everyone to have a domain of their own stacked with open source software seems a bit utopian right now.

I was surprised and delighted, therefore, to see a post on the GoDaddy blog extolling the virtues of the IndieWeb for business owners. The author explains that the IndieWeb movement was born of frustration:

Frustration from software developers who like the idea of social media, but who do not want to hand over their content to some big, unaccountable internet company that unilaterally decides who gets to see what.

Frustration from writers and content creators who do not want a third party between them and the people they want to reach.

Frustration from researchers and journalists who need a way to get their message out without depending on the whim of a big company that monitors, and sometimes censors, what they have to say.

He does a great job of explaining, with an appropriate level of technical detail, how to get started. The thing I'd really like to see in particular is people publishing details of events at a public URL instead of (just) on Facebook:

Importantly, with IndieAuth, you can log into third-party websites using your own domain name. And your visitors can log into your website with their domain name. Or, if you organize events, you can post your event announcement right on your website, and have attendees RSVP either from their own IndieWeb sites, or natively on a social site.

A recommended read. I'll be pointing people to this in future!

Source: GoDaddy

Three most harmful addictions

“The three most harmful addictions are heroin, carbohydrates, and a monthly salary.”

(Nassim Nicholas Taleb)

More on Facebook's 'trusted news' system

Mike Caulfield reflects on Facebook’s announcement that they’re going to allow users to rate the sources of news in terms of trustworthiness. Like me, and most people who have thought about this for more than two seconds, he thinks it’s a bad idea.

Instead, he thinks Facebook should try Google’s approach:

Most people misunderstand what the Google system looks like (misreporting on it is rife) but the way it works is this. Google produces guidance docs for paid search raters who use them to rate search results (not individual sites). These documents are public, and people can argue about whether Google’s take on what constitutes authoritative sources is right — because they are public.

Facebook's algorithms are opaque by design, whereas, Caulfield argues, Google's approach is documented:

I’m not saying it doesn’t have problems — it does. It has taken Google some time to understand the implications of some of their decisions and I’ve been critical of them in the past. But I am able to be critical partially because we can reference a common understanding of what Google is trying to accomplish and see how it was falling short, or see how guidance in the rater docs may be having unintended consequences.

This is one of the major issues of our time, particularly now that people have access to the kind of CGI previously only available to Hollywood. And what are they using this AI-powered technology for? Fake celebrity (and revenge) porn, of course.

Source: Hapgood

Living in capitalism

“We live in capitalism, its power seems inescapable – but then, so did the divine right of kings. Any human power can be resisted and changed by human beings.”

(Ursula Le Guin)

Anxiety is the price of convenience

Remote working, which I’ve done for over five years now, sounds awesome, doesn’t it? Open your laptop while still in bed, raid the biscuit barrel at every opportunity, spend more time with your family…

Don’t get me wrong, it is great and I don’t think I could ever go back to working full-time in an office. That being said, there’s a hidden side to remote working which no-one ever tells you about: anxiety.

Every interaction when you’re working remotely is an intentional act. You either have to schedule a meeting with someone, or ‘ping’ them to see if they’re available. You can’t see that they’re free, wander over to talk to them, or bump into them in the corridor, as you could if you were physically co-located.

When people don’t respond in a timely fashion, or within the time frame you were expecting, it’s unclear why that happened. This article picks up on that:

In recent decades, written communication has caught up—or at least come as close as it’s likely to get to mimicking the speed of regular conversation (until they implant thought-to-text microchips in our brains). It takes more than 200 milliseconds to compose a text, but it’s not called “instant” messaging for nothing: There is an understanding that any message you send can be replied to more or less immediately.

But there is also an understanding that you don’t have to reply to any message you receive immediately. As much as these communication tools are designed to be instant, they are also easily ignored. And ignore them we do. Texts go unanswered for hours or days, emails sit in inboxes for so long that “Sorry for the delayed response” has gone from earnest apology to punchline.

It’s not just work, either. Because we carry our smartphones with us everywhere, my wife expects an almost instantaneous response on even the most trivial matters. I’ve come back to my phone with a stream of ‘oi’ messages before…

It’s anxiety-inducing because written communication is now designed to mimic conversation—but only when it comes to timing. It allows for a fast back-and-forth dialogue, but without any of the additional context of body language, facial expression, and intonation. It’s harder, for example, to tell that someone found your word choice off-putting, and thus to correct it in real-time, or try to explain yourself better. When someone’s in front of you, “you do get to see the shadow of your words across someone else’s face,” [Sherry] Turkle says.

Lots to ponder here. A lot of it has to do with the culture of your organisation / family, at the end of the day.

Source: The Atlantic (via Hurry Slowly)

Different sorts of time

Growing up, I always thought I’d write for a living. Initially I wanted to be a journalist; as it turns out, thinking and writing make up about 75% of what I do on a weekly basis.

I’m always interested in how people who write full-time structure the process. This, from Jon McGregor, struck a chord with me:

There are other sorts of time, besides the writing time. There is thinking time, reading time, research time and sketching out ideas time. There is working on the first page over and over again until you find the tone you’re looking for time. There is spending just five minutes catching up on email time. There is spending five minutes more on Twitter because, in a way, that is part of the research process time. There is writing time, somewhere in there. There is making the coffee and clearing away the coffee and thinking about lunch and making the lunch and clearing away the lunch time. There is stretching the legs time. There is going for a long walk because all the great writers always talk about walking time being the best thinking time, and then there is getting back from that walk and realising what the hell the time is now time. There’s looking back over what you’ve written so far and deciding it is all a load of awkwardly phrased bobbins time; there is wondering what kind of a way this is to make a living at all time. There is finding the tail-end of an idea that might just work and trying to get that down on the page before you run out of time time. There is answering emails that just can’t be put off any longer time. There is moving to another table and setting a timer and refusing to look up from the page until you’ve written for 40 minutes solid time. There is reading that back and crossing it out time. And then there is running out of the door and trying to get to the school gates at anything like a decent time time.

I've written before, elsewhere, about how difficult it is for knowledge workers such as writers to quantify what counts as 'work'. Does a walk in the park while thinking about what you're going to write count? What about when you're in the shower planning something out?

It’s complicated.

Source: The Guardian

Some podcast recommendations

Despite no longer having a commute, I still find time to listen to podcasts. They’re useful for a variety of reasons: I can be doing something else while listening to them such as walking, going to the gym, or boring admin, and they don’t require me to look at a screen (which I do most of the day).

So it’s very useful for Bryan Alexander to share the podcasts he’s listening to at present. Here’s a couple that were new to me:

Beyond the Book – a look into the book publishing industry. It’s clearly biased in favor of strong copyright policies and practices, a bias I don’t share, but the program is also very informative.

Very Bad Wizards – two thinkers and, sometimes, a guest brood about deep questions concerning human psychology, philosophy, and ethics. It’s not my usual fare, so I enjoy learning.

Podcasts are basically RSS feeds with audio enclosures; as such, they can be exported as OPML files. Most podcast clients, including AntennaPod (which I use), allow you to do this.
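Since a subscription list is just a set of feed URLs, an OPML export is a small XML document that any client can read back. A minimal sketch using Python's standard library (the show names and feed URLs below are invented for illustration):

```python
import xml.etree.ElementTree as ET

# A minimal podcast OPML file: each subscription is an <outline>
# whose xmlUrl attribute points at the show's RSS feed
opml = """<opml version="2.0">
  <body>
    <outline text="Example Show" type="rss"
             xmlUrl="https://example.com/feed.xml"/>
    <outline text="Another Show" type="rss"
             xmlUrl="https://example.org/podcast.rss"/>
  </body>
</opml>"""

root = ET.fromstring(opml)
feeds = {o.get("text"): o.get("xmlUrl")
         for o in root.iter("outline") if o.get("type") == "rss"}
print(feeds["Example Show"])  # prints https://example.com/feed.xml
```

Because the format is this simple, moving your subscriptions between clients is painless, which is part of the appeal of podcasting as an open medium.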

Here’s my OPML file, as of today. I don’t listen to all of these podcasts regularly, just dipping in and out of them. My top five favourites are:

There's also, obviously, Today In Digital Education (TIDE) which I record with Dai Barnes. We'll be releasing our first episode of 2018 later this week!

Source: Bryan Alexander

DuckDuckGo moves beyond search

This is excellent news:

Today we’re taking a major step to simplify online privacy with the launch of fully revamped versions of our browser extension and mobile app, now with built-in tracker network blocking, smarter encryption, and, of course, private search – all designed to operate seamlessly together while you search and browse the web. Our updated app and extension are now available across all major platforms – Firefox, Safari, Chrome, iOS, and Android – so that you can easily get all the privacy essentials you need on any device with just one download.

I have a multitude of blockers installed, which makes it difficult to recommend just one to people. Hopefully this will simplify things:

For the last decade, DuckDuckGo has been giving you the ability to search privately, but that privacy was only limited to our search box. Now, when you also use the DuckDuckGo browser extension or mobile app, we will provide you with seamless privacy protection on the websites you visit. Our goal is to expand this privacy protection over time by adding even more privacy features into this single package. While not all privacy protection can be as seamless, the essentials available today and those that we will be adding will go a long way to protecting your privacy online, without compromising your Internet experience.

It looks like the code is all open source, too! 👏 👏 👏

Source: DuckDuckGo blog

Facebook is under attack

This year is a time of reckoning for the world’s most popular social network. From their own website (which I’ll link to via archive.org because I don’t link to Facebook). Note the use of the passive voice:

Facebook was originally designed to connect friends and family — and it has excelled at that. But as unprecedented numbers of people channel their political energy through this medium, it’s being used in unforeseen ways with societal repercussions that were never anticipated.

It's pretty amazing that a Facebook spokesperson is saying things like this:

I wish I could guarantee that the positives are destined to outweigh the negatives, but I can’t. That’s why we have a moral duty to understand how these technologies are being used and what can be done to make communities like Facebook as representative, civil and trustworthy as possible.

What they are careful to do is to paint a picture of Facebook as somehow 'neutral' and being 'hijacked' by bad actors. This isn't actually the case.

As an article in The Guardian points out, executives at Facebook and Twitter aren’t exactly heavy users of their own platforms:

It is a pattern that holds true across the sector. For all the industry’s focus on “eating your own dog food”, the most diehard users of social media are rarely those sitting in a position of power.

These sites are designed to be addictive. So, just as drug dealers "don't get high on their own supply", so those designing social networks know what they're dealing with:

These addictions haven’t happened accidentally... Instead, they are a direct result of the intention of companies such as Facebook and Twitter to build “sticky” products, ones that we want to come back to over and over again. “The companies that are producing these products, the very large tech companies in particular, are producing them with the intent to hook. They’re doing their very best to ensure not that our wellbeing is preserved, but that we spend as much time on their products and on their programs and apps as possible. That’s their key goal: it’s not to make a product that people enjoy and therefore becomes profitable, but rather to make a product that people can’t stop using and therefore becomes profitable.

The trouble is that this advertising-fuelled medium, which is built to be addictive, is the place where most people get their news these days. Facebook has realised it has a problem in this regard, so it has decided to pass the buck. Instead of Facebook, or anyone else, deciding which news sources an individual should trust, it's being left up to users.

While this sounds empowering and democratic, I can’t help but think it’s a bad move. As The Washington Post notes:

“They want to avoid making a judgment, but they are in a situation where you can’t avoid making a judgment,” said Jay Rosen, a journalism professor at New York University. “They are looking for a safe approach. But sometimes you can be in a situation where there is no safe route out.”

The article continues to cite former Facebook executives who think that the problems are more than skin-deep:

They say that the changes the company is making are just tweaks when, in fact, the problems are a core feature of the Facebook product, said Sandy Parakilas, a former Facebook privacy operations manager.

“If they demote stories that get a lot of likes, but drive people toward posts that generate conversation, they may be driving people toward conversation that isn’t positive,” Parakilas said.

A final twist in the tale is that Rupert Murdoch, a guy who has no morals but certainly has a valid point here, has made a statement on all of this:

If Facebook wants to recognize ‘trusted’ publishers then it should pay those publishers a carriage fee similar to the model adopted by cable companies. The publishers are obviously enhancing the value and integrity of Facebook through their news and content but are not being adequately rewarded for those services. Carriage payments would have a minor impact on Facebook’s profits but a major impact on the prospects for publishers and journalists.”

2018 is going to be an interesting year. If you want to quit Facebook and/or Twitter and be part of something better, why not join me on Mastodon via social.coop and help build Project MoodleNet?

Sources: Facebook newsroom / The Guardian / The Washington Post / News Corp

Where would your country be if the world was like Pangea?

I love this kind of stuff. As my daughter commented when I showed her, “we would be able to walk to Spain!”

The supercontinent of Pangea formed some 270 million years ago, during the Early Permian Period, and then began to break up 70 million years later, eventually yielding the continents we inhabit today. Pangea was, of course, a peopleless place. But if you were to drop today's nations on that great land mass, here's what it might look like.

Source: Open Culture

Amazon Go, talent and labour

I’ll try and explain what Amazon Go is without sounding a note of incredulity and rolling my eyes. It’s a shop where shoppers submit to constant surveillance for the slim reward of not having to line up to pay. Instead, they enter the shop by identifying themselves via the Amazon app on their smartphone, and their shopping is then charged to their account.

Ben Thompson zooms out from this to think about the ‘game’ Amazon is playing here:

The economics of Amazon Go define the tech industry; the strategy, though, is uniquely Amazon’s. Most of all, the implications of Amazon Go explain both the challenges and opportunities faced by society broadly by the rise of tech.
He goes on to explain that Amazon really really likes fixed costs, which is what their new store provides. Yes, R&D is expensive, but then afterwards you can predict your costs, and concentrate on throughput:

Fixed costs, on the other hand, have no relation to revenue. In the case of convenience stores, rent is a fixed cost; 7-11 has to pay its lease whether it serves 100 customers or serves 1,000 in any given month. Certainly the more it serves the better: that means the store is achieving more “leverage” on its fixed costs.

In the case of Amazon Go specifically, all of those cameras and sensors and smartphone-reading gates are fixed costs as well — two types, in fact. The first is the actual cost of buying and installing the equipment; those costs, like rent, are incurred regardless of how much revenue the store ultimately produces.
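The "leverage" idea can be made concrete with a toy calculation (all the figures below are invented): spreading the same fixed cost over more customers drives the per-customer cost down, while variable cost per customer stays flat.

```python
def cost_per_customer(fixed, variable_per_customer, customers):
    # Fixed costs (rent, sensors, gates) are spread over every customer;
    # variable costs are incurred per customer regardless of volume
    return fixed / customers + variable_per_customer

# Same $10,000 monthly fixed cost, $1 variable cost per visit
print(cost_per_customer(10_000, 1.0, 100))    # prints 101.0 (poor leverage)
print(cost_per_customer(10_000, 1.0, 1_000))  # prints 11.0 (better leverage)
```

This is why throughput matters so much once the up-front technology investment is made: every extra customer is served at close to the variable cost alone.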

Just as Amazon built amazingly scalable server technology and then opened it out as a platform for others to build websites and apps upon, so Thompson sees Amazon Go as the first move in the long game of providing technology to other shops/brands.

In market after market the company is leveraging software to build horizontal businesses that benefit from network effects: in e-commerce, more buyers lead to more suppliers lead to more buyers. In cloud services, more tenants lead to great economies of scale, not just in terms of servers and data centers but in the leverage gained by adding ever more esoteric features that both meet market needs and create lock-in... [T]he point of buying Whole Foods was to jump start a similar dynamic in groceries.

Thompson is no socialist, so I had a little chuckle at his reference to Marx towards the end of the article:

The political dilemma embedded in this analysis is hardly new: Karl Marx was born 200 years ago. Technology like Amazon Go is the ultimate expression of capital: invest massive amounts of money up front in order to reap effectively free returns at scale. What has fundamentally changed, though, is the role of labour: Marx saw a world where capital subjugated labour for its own return; technologies like Amazon Go have increasingly no need for labor at all.

He does have a point, though, and reading Inventing the Future: Postcapitalism and a World Without Work convinced me that even ardent socialists should be advocating for full automation.

This is all related to points made about the changing nature of work by Harold Jarche in a new article he’s written:

As routine and procedural work gets automated, human work will be increasingly complex, requiring permanent skills for continuous learning and adaptation. Creativity and empathy will be more important than compliance and intelligence. This requires a rethinking of jobs, employment, and organizational management.

Some people worry that there won't be enough jobs to go around. However, the problem isn't employment; the problem is neoliberalism, late-stage capitalism, and the fact that the richest 1% of people own more than 55% of the world's wealth.

Sources: Stratechery and Harold Jarche

WTF is GDPR?

I have to say, I was quite dismissive of the impact of the EU’s General Data Protection Regulation (GDPR) when I first heard about it. I thought it was going to be another debacle like the ‘this website uses cookies’ thing.

However, I have to say I’m impressed with what’s going to happen in May. It’s going to have a worldwide impact, too — as this article explains:

For an even shorter tl;dr the [European Commission's] theory is that consumer trust is essential to fostering growth in the digital economy. And it thinks trust can be won by giving users of digital services more information and greater control over how their data is used. Which is — frankly speaking — a pretty refreshing idea when you consider the clandestine data brokering that pervades the tech industry. Mass surveillance isn’t just something governments do.

It’s a big deal:

[GDPR is] set to apply across the 28-Member State bloc as of May 25, 2018. That means EU countries are busy transposing it into national law via their own legislative updates (such as the UK’s new Data Protection Bill — yes, despite the fact the country is currently in the process of (br)exiting the EU, the government has nonetheless committed to implementing the regulation because it needs to keep EU-UK data flowing freely in the post-brexit future. Which gives an early indication of the pulling power of GDPR.

...and unlike other regulations, actually has some teeth:

The maximum fine that organizations can be hit with for the most serious infringements of the regulation is 4% of their global annual turnover (or €20M, whichever is greater). Though data protection agencies will of course be able to impose smaller fines too. And, indeed, there’s a tiered system of fines — with a lower level of penalties of up to 2% of global turnover (or €10M).

I'm having conversations about it wherever I go, from my work at Moodle (a company headquartered in Australia) to the local Scouts.
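The tiered fine system quoted above reduces to a simple rule: take the relevant percentage of global annual turnover, with the flat amount acting as a floor. A quick sketch (the percentages and floors are from the article; the function name is mine):

```python
def max_gdpr_fine(global_turnover_eur, serious=True):
    """Upper bound on a GDPR fine under the tiered system.

    Serious infringements: the greater of 4% of global annual turnover or EUR 20M.
    Lower tier: the greater of 2% of turnover or EUR 10M.
    """
    pct, floor = (0.04, 20_000_000) if serious else (0.02, 10_000_000)
    return max(pct * global_turnover_eur, floor)

# A company turning over EUR 2bn faces up to EUR 80M for serious breaches
print(max_gdpr_fine(2_000_000_000))  # prints 80000000.0
# For a smaller company, the flat floor dominates
print(max_gdpr_fine(100_000_000))    # prints 20000000
```

The "whichever is greater" wording is what gives the regulation teeth for both small firms (via the floor) and giants (via the percentage).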

Source: TechCrunch

Decentralisation 2.0

What this article calls ‘Decentralisation 2.0’ is actually redecentralising the web. There’s an urgent need:

A huge percentage of today’s communications flows through channels owned by a few entities, which in turn do all they can to influence these communications. Google alone comprises 25 percent of all US internet traffic right now, and has access to millions upon millions of users’ personal information. Where the internet was once seen as a tool for more societal freedom, it has come to represent the opposite.

The author takes aim at the so-called 'sharing economy' which, somewhat paradoxically, actually entrenches centralisation, as companies like Airbnb and Uber exercise a lot of control over their platforms:

Counterintuitively, this is only possible because of a high degree of centralization: the company owns the identity of its participants, the transportation logistics, the payment mechanisms, the pricing, and the rules that govern the marketplace.

The author has experience of bottom-up activism in Russia, challenging dominant players that promote unfair practices. I like his optimism about blockchain-based technologies. I don't necessarily share it, but we can hope:

True decentralization is fast approaching. Before long, we will see it in public administration, finance, real estate, insurance, transportation, and other key areas — often enabled by the blockchain technology. Its purpose is not to destroy centralized systems, but to create extra relationships on top of them. While maintaining the advantages of conventional platforms, decentralization 2.0 will reduce people’s dependence on mediators.

Source: The Next Web

First step

“You don’t have to see the whole staircase. Just take the first step.”

(Martin Luther King)

The rise and rise of niche newsletters

Email is an open, federated standard. You can’t kill it.

The email inbox has become the modern day equivalent of the newsagent and offers a daily treasure trove of breaking news, analysis and inside information.

Newsletters, based on email, are a great bet for organisations, brands, and individuals looking to build an audience.

In 2011, The Financial Times asked “Is this the end of email?” in an article highlighting the medium’s “inefficiency” as a business tool. Today, the FT serves its premium subscriber base with a portfolio of 43 email newsletters from “Brussels Briefing” to “FT Swamp Notes” (an insider’s guide to Donald Trump’s administration).

I very much enjoy publishing both this blog, and then curating the links into the weekly newsletter. I wish more people would do likewise!

Source: The Independent

The backstory of Apple's emoji

This is a lovely post, full of insights and humour. A designer, now at Google but originally an intern at Apple, talks about the first iterations of their emoji.

My favourite part:

Sometimes our emoji turned out more comical than intended and some have a backstory. For example, Raymond reused his happy poop swirl as the top of the ice cream cone. Now that you know, bet you’ll never forget. No one else who discovered this little detail did either.

A fantastic read, really made my day.

Source: Angela Guzman

Tribal politics in social networks

I’ve started buying the Financial Times Weekend along with The Observer each Sunday. Annoyingly, while the latter doesn’t have a paywall, the FT does which means although I can quote from, and link to, this article by Simon Kuper about tribal politics, many of you won’t be able to read it in full.

Kuper makes the point that in a world of temporary jobs, ‘broken’ families, and declining church attendance, social networks provide a place where people can find their ‘tribe’:

Online, each tribe inhabits its own filter bubble of partisan news. To blame this only on Facebook is unfair. If people wanted a range of views, they could install both rightwing and leftwing feeds on their Facebook pages — The Daily Telegraph and The Guardian, say. Most people choose not to, partly because they like living in their tribe. It makes them feel less lonely.

There's a lot to agree with in this article. I think we can blame people for getting their news mainly through Facebook. I think we can roll our eyes at people who don't think carefully about their information environment.

On the other hand, social networks are mediated by technology. And technology is never neutral. For example, Facebook has gone from saying that it couldn’t possibly be blamed for ‘fake news’ (2016) to investigating the way that Russian accounts may have manipulated users (2017) to announcing that they’re going to make some changes (2018, NSFW language in link).

We need to zoom out from specific problems in our society to the wider issues that underpin them. Kuper does this to some extent in this article, but the FT isn’t the place where you’ll see a robust criticism of the problems with capitalism. Social networks can, and have, been different — just think of what Twitter was like before becoming a publicly-traded company, for example.

My concern is that we need to sort out these huge, society-changing companies before they become too large to regulate.

Source: FT Weekend

Some advice for a happy family life

Last weekend, and on the day before The Guardian changed to a new, smaller format, Tim Lott, one of my favourite columnists, wrote his last article.

It contains “a few principles worth thinking about if you hope for a functional family life”. There are some gems in the short article.

Be kind. If there is a simple secret to relationships, it is probably this. However, not too kind. You can do as much damage by being overindulgent as by being neglectful. Your children are your children, not your friends. Their positive judgment of you is good to have, but it is not a necessity.

Given our recurring conversations about whether or not to move to a bigger house, I found this reassuring:

Maintain intimacy. There are a number of practical methods for doing this. Don’t buy a big house. People are always trying to extend the size of their living spaces, but smaller spaces bring people together.

And then, as a parent of two strong-minded, wilful, but ultimately pleasant children, this also reassured me:

Finally, and perhaps most importantly – you’re not as powerful as you think. And you are going to fail as a parent – everyone does – but less than you imagine. Children are independent beings and make their own choices and interpretations. There’s culture, there’s nature, there’s nurture and there’s how each individual child chooses to interpret what’s coming at them. That last part, you have no control over. So don’t beat yourself up too much – or pat yourself on the back too much, either. You’re a fragile link in a long chain of causality.

Source: The Guardian

Issue #288: Socially and emotionally unavailable

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

A world without work

I’m not sure that looking at a screen all day necessarily means you’ve got a ‘bullshit job’, but this article nevertheless makes some good points:

Whether you look at a screen all day, or sell other underpaid people goods they can’t afford, more and more work feels pointless or even socially damaging – what the American anthropologist David Graeber called “bullshit jobs” in a famous 2013 article. Among others, Graeber condemned “private equity CEOs, lobbyists, PR researchers … telemarketers, bailiffs”, and the “ancillary industries (dog-washers, all-night pizza delivery) that only exist because everyone is spending so much of their time working”.

The best non-fiction book I read last year was Inventing the Future: Postcapitalism and a World Without Work by Nick Srnicek and Alex Williams. This is cited in the article, along with the left's preoccupation with the politics of organised work.

A large part of the left has always organised itself around work. Union activists have fought to preserve it, by opposing redundancies, and sometimes to extend it, by securing overtime agreements. “With the Labour party, the clue is in the name,” says Chuka Umunna, the centre-left Labour MP and former shadow business secretary, who has become a prominent critic of post-work thinking as it has spread beyond academia. The New Labour governments were also responding, Umunna says, to the failure of their Conservative predecessors to actually live up to their pro-work rhetoric: “There had been such high levels of unemployment under the Tories, our focus was always going to be pro-job.”

Instead, say those who advocate a 'post-work' future, we should be thinking beyond the way our physical and psychological environment is structured.

Town and city centres today are arranged for work and consumption – work’s co-conspirator – and very little else; this is one of the reasons a post-work world is so hard to imagine. Adapting office blocks and other workplaces for other purposes would be a huge task, which the post-workists have only just begun to think about. One common proposal is for a new type of public building, usually envisaged as a well-equipped combination of library, leisure centre and artists’ studios. “It could have social and care spaces, equipment for programming, for making videos and music, record decks,” says Stronge. “It would be way beyond a community centre, which can be quite … depressing.”

We get the future we deserve. So if we keep on doing the same old, same old when it comes to the way we organise work, we'll end up with the same kind of structures around it.

Source: The Guardian

Few wants

“Wealth consists not in having great possessions, but in having few wants.”

(Epictetus)

Film posters of the Russian avant-garde

I love the style of these posters, published in a new book to mark the centenary of the Russian Revolution.

So creative!

Source: i-D

Atlas of Hillforts

This makes me happy.

Back in 2013, archaeologists at Oxford and Edinburgh teamed up to work on the Atlas of Hillforts. Their four-year mission was to identify every single hill fort in Britain and Ireland, along with their key features. This had never been done before, and as Oxford’s Prof. Gary Lock said, it would allow archaeologists to “shed new light on why they were created and how they were used”.

Although prehistory is 'not my period' as an historian, I'm fascinated by it, and often look for hill forts during my mountain walks.

When the project was under development, Wikimedia UK was supporting a Wikimedian in Residence (WIR) at the British Library, Andrew Gray. He talked to the people involved in the project and suggested using Wikipedia to share the results of the project. After all, they were going to create a free-to-access online database. Perhaps the information could be used to update Wikipedia’s various lists of hillforts?

That data is now live. What a resource! The internet, and in particular working openly, is awesome.

Source: Wikimedia UK

Gendered AI?

Another fantastic article from Tim Carmody, a.k.a. Dr. Time:

An Echo or an iPhone is not a friend, and it is not a pet. It is an alarm clock that plays video games. It has no sentience. It has no personality. It’s a string of canned phrases that can’t understand what I’m saying unless I’m talking to it like I’m typing on the command line. It’s not genuinely interactive or conversational. Its name isn’t really a name so much as an opening command phrase. You could call one of these virtual assistants “sudo” and it would make about as much sense.

However.

I have also watched a lot (and I mean a lot) of Star Trek: The Next Generation. And while I feel pretty comfortable talking about “it” in the context of the speaker that’s sitting on the table across the room—there’s even a certain rebellious jouissance to it, since I’m spiting the technology companies whose products I use but whose intrusion into my life I resent—I feel decidedly uncomfortable declaring once and for all time that any and all AI assistants can be reduced to an “it.” It forecloses on a possibility of personhood and opens up ethical dilemmas I’d really rather avoid, even if that personhood seems decidedly unrealized at the moment.

I’m really enjoying his new ‘column’ as well as Noticing, the newsletter he curates.

Source: kottke.org

Imprisoned in prejudices

“The man who has no tincture of philosophy goes through life imprisoned in the prejudices derived from common sense, from the habitual beliefs of his age or his nation, and from convictions which have grown up in his mind without the co-operation or consent of his deliberate reason. To such a man the world tends to become definite, finite, obvious; common objects rouse no questions, and unfamiliar possibilities are contemptuously rejected.”

(Bertrand Russell)

Barely anyone uses 2FA

This is crazy.

In a presentation at Usenix's Enigma 2018 security conference in California, Google software engineer Grzegorz Milka today revealed that, right now, less than 10 per cent of active Google accounts use two-step authentication to lock down their services. He also said only about 12 per cent of Americans have a password manager to protect their accounts, according to a 2016 Pew study.

Two-factor authentication (2FA), especially the kind where you use an app authenticator, is so awesome that you can use a much weaker password than normal, should you wish. (I, however, stick to the 16-digit one created by a deterministic password manager.)

Please, if you haven't already done so, just enable two-step authentication. This means when you or someone else tries to log into your account, they need not only your password but authorization from another device, such as your phone. So, simply stealing your password isn't enough – they need your unlocked phone, or similar, to get in.

I can't understand people who basically live their lives permanently one step away from being hacked. And for what? A very slightly more convenient life? Mad.
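For the curious, the codes those app authenticators generate come from TOTP (RFC 6238): an HMAC over the current 30-second interval, truncated to a few digits. A minimal sketch in Python (the secret below is just the RFC's published test value, not anything you should actually use):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """One-time code for the given Unix time (RFC 6238, HMAC-SHA1)."""
    counter = for_time // step  # number of 30-second intervals since the epoch
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890", time 59 -> "94287082"
print(totp(b"12345678901234567890", 59, digits=8))  # prints 94287082
```

Your phone and the server share the secret once (usually via a QR code), then each derives the same short-lived code independently – which is exactly why stealing the password alone isn't enough.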

Source: The Register

Courage

“Life shrinks or expands according to one’s courage.”

(Anaïs Nin)

Using your phone wisely

I’m a big fan of The Book of Life, a project of The School of Life. One of the latest updates to this project is about the pervasive use of smartphones in society.

To say we are addicted to our phones is not merely to point out that we use them a lot. It signals a darker notion: that we use them to keep our own selves at bay. Because of our phones, we may find ourselves incapable of sitting alone in a room with our own thoughts floating freely in our own heads, daring to wander into the past and the future, allowing ourselves to feel pain, desire, regret and excitement.

I feel this. I want my mind to wander, but I also kind of want to be informed. I want to be entertained.

We have to check our phones of course but we also need to engage directly with others, to be relaxed, immersed in nature and present. We need to let our minds wander off of their own accord. We need to go through the threshold of boredom to renew our acquaintance with ourselves.

The diminutive digital assistants in our pockets do our bidding and unlock a multitude of possibilities.

Our phone, however, is docile, responsive to our touch, always ready to spring to life and willing to do whatever we want. Its malleability provides the perfect excuse for disengagement from the trickier aspects of other people. It’s almost not that rude to give it a quick check – just possibly we might actually need to keep track of how a news story is unfolding; a friend in another country may have just had a baby or someone we vaguely know might have bought a new pair of shoes in the last few minutes.

It’s a cliché to say that it’s the small things in life that make it worth living, but it’s true.

Our phones seem to deliver the world directly to us. Yet (without our noticing) they often limit the things we actually pay attention to. As we look down towards our palms we don’t realise we are forgetting:
  • The curious delicacy of a friend’s wrist
  • The soothing sound of traffic in the distance
  • Moss on an old stone wall
  • The pleasure of feeling tired after working hard
  • The excitement of getting up very early on a summer’s morning, in order to have an hour entirely to oneself.
  • A bank of clouds gradually drifting across the sky
  • The texture and smell and colour of a ripe fig
  • The shy hesitancy of someone’s smile
  • How nice it is to read in the bath
  • The comfort of an old jumper (with holes under the armpits)

Every technology is a ‘bridging’ technology in the sense of coming after something less sophisticated, and before something more sophisticated. My hope is that we iterate towards, rather than away from, what makes us human.

We are still so far from inventing the technology we really require for us to flourish; capitalism has delivered only on the simplest of our needs. We can summon up the street map of Lyons but not a diagram of what our partner is really thinking and feeling; the phone will help us follow fifteen news outlets but not help us know when we’ve spent more than enough time doing so; it emphatically refuses to distinguish between the most profound needs of our soul and a passing fancy.

As ever, a fantastic article.

Source: The Book of Life

The wilderness of intuition

“At times you have to leave the city of your comfort and go into the wilderness of your intuition. What you’ll discover will be wonderful. What you’ll discover is yourself.”

(Alan Alda)

Can you measure social and emotional skills?

Ben Williamson shines a light on the organisation behind the PISA testing regime moving into the realm of social and emotional skills:

The OECD itself has adopted ‘social and emotional skills,’ or ‘socio-emotional skills,’ in its own publications and projects. This choice is not just a minor issue of nomenclature. It also references how the OECD has established itself as an authoritative global organization focused specifically on cross-cutting, learnable skills and competencies with international, cross-cultural applicability and measurability rather than on country-specific subject achievement or locally-grounded policy agendas.

I really can’t stand this kind of stuff. Using proxies for the thing instead of trying to engender a more holistic form of education. It’s reductionist and instrumentalist.

This project exemplifies a form of stealth assessment whereby students are being assessed on criteria they know nothing about, and which rely on micro-analytics of their gestures across interfaces and keyboards. It appears likely that SSES, too, will involve correlating such process metadata with the OECD’s own SELS constructs to produce stealth assessments for quantifying student skills.

If you create data, people will use that data to judge students and rank them. Of course they will.

However, over time SSES could experience function creep. PISA testing has itself evolved considerably and gradually been taken up in more and more countries over different iterations of the test. The new PISA-based Test for Schools was produced in response to demand from schools. Organizations like CASEL are already lobbying hard for social-emotional learning to be used as an accountability measure in US education—and has produced a State-Scan Scorecard to assess each of the 50 states on SEL goals and standards. Even if the OECD resists ranking and comparing countries by SELS, national governments and the media are likely to interpret the data comparatively anyway.

This is not a positive development.

Source: Code Acts in Education

Bullet Journal like a Pro

The inimitable Cal Newport, he of Deep Work fame, turns his attention to Bullet Journals:

My main concern, however, is that this system, as traditionally deployed, cannot keep up with the complexity and volume of demands that define many modern knowledge work jobs, where the sheer volume of tasks you must juggle, or calendar events in a typical week, might overwhelm any attempt to exist entirely within a world of concise and neatly transcribed notebook pages.

Cal therefore recommends some modifications:

  • Introduce weekly plans
  • Time block daily plans
  • Maintain a deep work tally
  • Augment the notebook with a calendar and master task list
  • Integrate email

I might just try this!

Source: Cal Newport

Choose your connected silo

The Verge reports back from CES, the yearly gathering where people usually get excited about shiny things. This year, however, people are a bit more wary…

And it’s not just privacy and security that people need to think about. There’s also lock-in. You can’t just buy a connected gadget, you have to choose an ecosystem to live in. Does it work with HomeKit? Will it work with Alexa? Will some tech company get into a spat with another tech company and pull its services from that hardware thing you just bought?

In other words, the kind of digital literacies required by the average consumer just went up a notch.

Here’s the thing: it’s unlikely that the connected toothpaste will go back in the tube at this point. Consumer products will be more connected, not less. Some day not long from now, the average person’s stroll down the aisle at Target or Best Buy will be just like our experiences at futuristic trade shows: everything is connected, and not all of it makes sense.

It won't be long before we'll be inviting techies around to debug our houses...

Source: The Verge

Game-changing modular wheels

This is fantastic:

The Revolve is a full-size 26-inch spoked wheel that can be folded to a third its diameter and 60 percent less space, and back again in an instant, and its commercial availability will offer new design possibilities for folding bicycles, folding wheelchairs and many other vehicles that need to be transported in compact form.

A real game-changer in terms of accessibility, I reckon.

Source: New Atlas

The full complexity of life

“The point is… to live one’s life in the full complexity of what one is, which is something much darker, more contradictory, more of a maelstrom of impulses and passions, of cruelty, ecstasy, and madness, than is apparent to the civilised being who glides on the surface and fits smoothly into the world.”

(Thomas Nagel)

From Homer to texting and Twitter

I love everything about this post:

Jason eventually got me to see that “Ask Dr. Time” didn’t have to be an advice column in a conventional sense. What if readers had problems that didn’t require common sense or finely honed interpersonal skills, but an ability to make sense of abstruse reasoning? What if they didn’t need a fancy Watson but an armchair Wittgenstein? What if kottke.org hosted the first metaphysical advice columnist? That proposition is still absurd, but it’s absurd in an interesting way. And “absurd in an interesting way” is what Dr. Time is all about. Not practical solutions, but philosophical entanglements and disentanglings. That I could do.

Quoting from the introduction of Emily Wilson’s translation of Homer’s The Odyssey:

Subsequent studies, building on the work of Parry and Lord, have shown that there are marked differences in the ways that oral and literate cultures think about memory, originality, and repetition. In highly literate cultures, there is a tendency to dismiss repetitive or formulaic discourse as cliche; we think of it as boring or lazy writing. In primarily oral cultures, repetition tends to be much more highly valued. Repeated phrases, stories, or tropes can be preserved to some extent over many generations without the use of writing, allowing people in an oral culture to remember their own past. In Greek mythology, Memory (Mnemosyne) is said to be the mother of the Muses, because poetry, music, and storytelling are all imagined as modes by which people remember the times before they were born.

In my doctoral thesis (and subsequent book), I talked about the work of Walter Ong and 'secondary orality', which Dr. Time also introduces here:

What Ong helped conceptualize and popularize, especially in his book Orality and Literacy, was that in cultures with no tradition of literacy, orality had a fundamentally different character from those where literacy was dominant. It’s different again in cultures where literacy is known but scarce.

Answering the question of whether texting and Twitter is a return to a more 'oral' form of communicating, Dr. Time answers in the negative:

The only form of genuine speech that’s genuinely visual and not auditory is sign language. And sign language is speech-like in pretty much every way imaginable: it’s ephemeral, it’s interactive, there’s no record, the signs are fluid. But even most sign language is at least in part chirographic, i.e., dependent on writing and written symbols. At least, the sign languages we use today: although our spoken/vocal languages are pretty chirographic too.

[…]

So tweets and text messages aren’t oral. They’re secondarily literate. Wait, that sounds horrible! How’s this: they’re artifacts and examples of secondary literacy. They’re what literacy looks like after television, the telephone, and the application of computing technologies to those communication forms. Just as orality isn’t the same after you’ve introduced writing, and manuscript isn’t the same after you’ve produced print, literacy isn’t the same once you have networked orality. In this sense, Twitter is the necessary byproduct of television.

The author finally gets around to voice assistants such as Alexa and Siri towards the end. I’ve already quoted enough, so I encourage you to check it out in full.

Source: kottke.org

Would you be nuked?

In the light of the recent false alarm about the nuclear attack on her home of Hawaii, Amy Burvall shared this website in our Slack channel.

You can play about with it to find out the effect of different-sized nuclear bombs hitting somewhere near you. I live in Morpeth, Northumberland, UK so, as you can see from the map below, although we may die from radiation poisoning, an attack on our nearest city of Newcastle-upon-Tyne wouldn’t flatten buildings here.

nukemap

Makes you think.

Source: NUKEMAP

Where did 'Å' come from?

I’m (sadly) pretty monolingual, but as an historian by training I find things like this fascinating:

Regardless of who originally penned the idea, the new letter resulted from an unusual convergence: the Swedish Å owes its existence to a major religious reformation, a groundbreaking technological invention, the founding of a brand new nation, and the ever-flowing tide of phonetic evolution and language modernisation.

The post continues with a discussion of ‘diacritical marks’ used in other languages such as German and Czech. The author, who is also a type designer, has promised a follow-up post on uses of the letter ‘Å’ in contemporary typefaces.

Source: Frode Helland

Getting better at using tools

“Getting better at using tools comes to us, in part, when the tools challenge us, and this challenge often occurs just because the tools are not fit-for-purpose. They may not be good enough, or it’s hard to figure out how to use them. The challenge becomes greater when we are obliged to use these tools to repair or undo mistakes. In both creation and repair, the challenge can be met by adapting the form of a tool, or improvising with it as it is, using it in ways it was not meant for. However we come to use it, the very incompleteness of the tool has taught us something.”

(Richard Sennett, The Craftsman)

Cool decentralisation resources from MozFest

I missed the Mozilla Festival at the end of October 2017 as I’d already booked my family holiday by the time they announced the dates.

It’s always a great event and attracts some super-smart people doing some great thinking and creating on and with the open web.

Mark Boas co-curated the Decentralisation Space at MozFest, and recently wrote up his experiences.

Sessions incorporated various types of media, from photography and other visual artforms, through board games to hand assembled systems made out of ping-pong balls and straws. Some discussions dove into the nitty gritty of decentralising the web, many required no prior knowledge of the subject.

His post, which mentions the session that was run by my co-op colleagues John Bevan and Bryan Mathers, is a veritable treasure trove of resources to explore further.

Source: maboa.it

This isn't the golden age of free speech

You’d think with anyone, anywhere, being able to post anything to a global audience, that this would be a golden age of free speech. Right?

And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence? (Yes, there are systems that can create increasingly convincing fake videos.)

The problem is not with free speech itself; it's the means by which it's disseminated:

In the 21st century, the capacity to spread ideas and reach an audience is no longer limited by access to expensive, centralized broadcasting infrastructure. It’s limited instead by one’s ability to garner and distribute attention. And right now, the flow of the world’s attention is structured, to a vast and overwhelming degree, by just a few digital platforms: Facebook, Google (which owns YouTube), and, to a lesser extent, Twitter.

It's time to re-decentralise, people.

Source: WIRED

Open source apps for agile project teams

A really interesting post about open source apps, most of which I’ve never come across!

In this list, there are no project management apps, no checklists, and no integrations with GitHub. Just simple ways to organize your thoughts and promote team communication.

Will be exploring with interest.

Source: opensource.com

Robo-advisors are coming for your job (and that's OK)

Algorithms and artificial intelligence are an increasingly normal part of our everyday lives, notes this article, so the next step is in the workplace:

Each one of us is becoming increasingly more comfortable being advised by robots for everything from what movie to watch to where to put our retirement. Given the groundwork that has been laid for artificial intelligence in companies, it’s only a matter of time before the $60 billion consulting industry in the U.S. is going to be disrupted by robotic advisors.

I remember years ago being told that by 2020 it would be normal to have an algorithm on your team. It sounded fanciful at the time, but now we just take it for granted:

Robo-advisors have the potential to deliver a broader array of advice and there may be a range of specialized tools in particular decision domains. These robo-advisors may be used to automate certain aspects of risk management and provide decisions that are ethical and compliant with regulation. In data-intensive fields like marketing and supply chain management, the results and decisions that robotic algorithms provide is likely to be more accurate than those made by human intuition.

I'm kind of looking forward to this becoming a reality, to be honest. ‘Let machines do what machines are good at, and humans do what humans are good at’ would be my mantra.

Source: Harvard Business Review

Opposite of manliness

“The opposite of manliness isn’t cowardice; it’s technology.” (Nassim Nicholas Taleb)

Thought Shrapnel #287: My bad

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Reasons to be cheerful

David Byrne, a talented musician and author of one of my favourite books, has started a great new project:

I imagine, like a lot of you who look back over the past year, it seems like the world is going to Hell. I wake up in the morning, look at the paper, and go, "Oh no!" Often I’m depressed for half the day. It doesn’t matter how you voted on Brexit, the French elections or the U.S. election—many of us of all persuasions and party affiliations feel remarkably similar.

As a kind of remedy and possibly as a kind of therapy, I started collecting good news that reminded me, “Hey, there’s actually some positive stuff going on!” Almost all of these initiatives are local, they come from cities or small regions who have taken it upon themselves to try something that might offer a better alternative than what exists. Hope is often local. Change begins in communities.

The website will include material that falls into some pre-defined categories:

These bits of good news tend to fall into a few categories:
  • Education
  • Health
  • Civic Engagement
  • Science/Tech
  • Urban/Transportation
  • Energy
  • Culture

I'm looking forward to following his progress. I'd prefer an RSS feed, but you can follow along on social media or (like me) sign up by email.

Source: Reasons to be Cheerful

Attention is an arms race

Cory Doctorow writes:

There is a war for your attention, and like all adversarial scenarios, the sides develop new countermeasures and then new tactics to overcome those countermeasures.

Using a metaphor from virology, he notes that we become immune to certain types of manipulation over time:

When a new attentional soft spot is discovered, the world can change overnight. One day, every­one you know is signal boosting, retweeting, and posting Upworthy headlines like “This video might hurt to watch. Luckily, it might also explain why,” or “Most Of These People Do The Right Thing, But The Guys At The End? I Wish I Could Yell At Them.” The style was compelling at first, then reductive and simplistic, then annoying. Now it’s ironic (at best). Some people are definitely still susceptible to “This Is The Most Inspiring Yet Depressing Yet Hilarious Yet Horrifying Yet Heartwarming Grad Speech,” but the rest of us have adapted, and these headlines bounce off of our attention like pre-penicillin bacteria being batted aside by our 21st century immune systems.

However, the thing I’m concerned about is the kind of AI-based manipulation that is forever shape-shifting. How do we become immune to a moving target?

Source: Locus magazine

Barcelona to go open source by 2019

Great news for the open source community!

The City has plans for 70% of its software budget to be invested in open source software in the coming year. The transition period, according to Francesca Bria (Commissioner of Technology and Digital Innovation at the City Council) will be completed before the mandate of the present administrators come to an end in Spring 2019.

It also looks like it could be the start of a movement:

With this move, Barcelona becomes the first municipality to join the European campaign “Public Money, Public Code”.

It is an initiative of the Free Software Foundation of Europe and comes after an open letter that advocates that software funded publicly should be free. This call has been supported by about 15,000 individuals and more than 100 organizations.

Source: It’s FOSS

In a dark place

Last year, I remember being amazed by how black a new substance created by scientists was. Called Vantablack, it’s like a black hole for light:

Vantablack is genuinely amazing: It’s so good at absorbing light that if you move a laser onto it, the red dot disappears.

However, it turns out that Mother Nature already had that trick up her sleeve. Birds of Paradise have a similar ability:

A typical bird feather has a central shaft called a rachis. Thin branches, or barbs, sprout from the rachis, and even thinner branches—barbules—sprout from the barbs. The whole arrangement is flat, with the rachis, barbs, and barbules all lying on the same plane. The super-black feathers of birds of paradise, meanwhile, look very different. Their barbules, instead of lying flat, curve upward. And instead of being smooth cylinders, they are studded in minuscule spikes. “It’s hard to describe,” says McCoy. “It’s like a little bottle brush or a piece of coral.”

These unique structures excel at capturing light. When light hits a normal feather, it finds a series of horizontal surfaces, and can easily bounce off. But when light hits a super-black feather, it finds a tangled mess of mostly vertical surfaces. Instead of being reflected away, it bounces repeatedly between the barbules and their spikes. With each bounce, a little more of it gets absorbed. Light loses itself within the feathers.

Incredible.

Source: The Atlantic

How to build a consensual social network

Here’s another article that was linked to from the source of a post I shared recently. The paragraph quoted here is from the section entitled ‘Consent-Oriented Architecture’:

Corporations built to maximize profits are unable to build consensual platforms. Their business model depends fundamentally on surveillance and behavioral control. To build consensual platforms requires that privacy, security, and anonymity be built into the platforms as core features. The most effective way to secure consent is to ensure that all user data and control of all user interaction resides with the software running on the user’s own computer, not on any intermediary servers.
Earlier in that section, the author makes the obvious (but nevertheless alarming) point that audiences are sorted and graded as commodities to be bought and sold:

Audiences, like all commodities, are sold by measure and grade. Eggs are sold in dozens as grade A, for example. An advertiser might buy a thousand clicks from middle-aged white men who own a car and have a good credit rating.

In a previous section, the author notes that those who use social networks are subjects of an enclosed system:

The profits of the media monopolies are formed after surplus value has already been extracted. Their users are not exploited, but subjected, captured as an audience, and instrumentalized to extract surplus profits from other sectors of the ownership class.

I had to read some sections twice, but I'm glad I did. Great stuff, and very thought-provoking.

In short, to ensure Project MoodleNet is a consensual social network, we need to ensure full transparency and, if possible, that the majority of the processing of personal data is done on the user’s own device.

Source: P2P Foundation

Bigger the dream...

“The bigger the dream, the more important the team.” (Robin Sharma)

Money in, blood out

A marvellous post by Ryan Holiday, who is well versed in Stoic philosophy:

Seneca, the Roman statesman and writer, spoke often about wealthy Romans who have spent themselves into debt and the misery and dependence this created for them. Slavery, he said, often lurks beneath marble and gold. Yet, his own life was defined by these exact debts. With his own fortune, he made large loans to a colony of Britain at rates so high it eventually destroyed their economy. And what was the source of this fortune? The Emperor Nero was manipulatively generous with Seneca, bestowing upon him numerous estates and monetary awards in exchange for his advice and service. Seneca probably could have said no, but after he accepted the first one, the hooks were in. As Nero grew increasingly unstable and deranged, Seneca tried to escape into retirement but he couldn’t. He pushed all the wealth into a pile and offered to give it back with no luck.

Eventually, death—a forced suicide—was the only option. Money in, blood out.

You need to know what you stand for in life so you can politely decline those things that don’t mesh with your expectations and approach to life. This takes discipline, and discipline takes practice.

Source: Thought Catalog

Venture Communism?

As part of my Moodle work, I’ve been looking at GDPR and decentralised technologies, so I found the following interesting.

It’s worth pointing out that ‘disintermediation’ is the removal of intermediaries from a supply chain. Google, Amazon, Facebook, Microsoft, and Apple specialise in ‘anti-disintermediation’, or plain old vendor lock-in. So ‘counter-anti-disintermediation’ is working against that in a forward-thinking way.

Central to the counter-anti-disintermediationist design is the End-to-End principle: platforms must not depend on servers and admins, even when cooperatively run, but must, to the greatest degree possible, run on the computers of the platform’s users. The computational capacity and network access of the users’ own computers must collectively make up the resources of the platform, such that, on average, each new user adds net resources to the platform. By keeping the computational capacity in the hands of the users, we prevent the communication platform from becoming capital, and we prevent the users from being instrumentalized as an audience commodity.

The great thing about that, of course, is that solutions such as ZeroNet allow for this, in a way similar to BitTorrent networks making more popular content more available.

The linked slides from that article describe ‘venture communism’, an approach characterised by co-operative control, open federated systems, and commons ownership. Now that’s something I can get behind!

Source: P2P Foundation

Fake amusement park

This made me smile:

The show is called “Fake Theme Parks” and it debuts Friday, January 12 at Gallery 1988 in Los Angeles. Fifty artists created a huge variety of work based on parks from TV, movies, video games, and more.

[…]

Itchy & Scratchy Land, Krustyland, and Duff Gardens are from The Simpsons; Anatomy Park is from Rick and Morty; Brisbyland is from Venture Bros.; Arctic World is from Batman Returns; Funland is from Scooby-Doo; Monkey Island is from a game of the same name (by LucasArts); Walley World is from Vacation; and Pacific Playland (not pictured) is from Zombieland.

I have unlimited love for the Monkey Island series of games. So much so that I’m afraid that if I replayed them as an adult I’d destroy part of my remembered youth.

Fun fact: Ron Gilbert, the creator of the first two Monkey Island games, wrote a blog post a few years ago about how he would approach making a new version. He’s not going to, though, sadly.

Source: io9

Questions to ask before taking your next job

This is a fantastic resource for those who are thinking about their next move. Increasingly, it’s less about a one-way fit of you being right for the organisation, and more about the organisation fitting you.

With job interviews lasting only a few hours, it is very difficult to know what a new role will be like before you accept a job offer. You must put in some work to get as clear of a picture as possible.

Just as your potential employers will evaluate you, I recommend intentionally thinking through what metrics and questions you’ll use to evaluate them. Here’s how.

Really good advice, and as someone who’s just started a new job, I agree that these are exactly the questions you need to get right.

Source: Quartz

Dreamers who do

“The world needs dreamers and the world needs doers. But above all, the world needs dreamers who do.” (Sarah Ban Breathnach)

Deliberate rest, cognitive momentum, and differentiated work hours

Appropriately enough, it was during a lunchtime run that I listened to the latest episode of Jocelyn K. Glei’s excellent podcast. It featured Alex Pang, writer and futurist, on the benefits of rest for the creative process.

He talked about a number of things, but it confirmed my belief that you can only really do four hours of focused, creative work per day. Of course, you can add status-update meetings and emails to that, but the core of anyone’s work should be this sustained, disciplined period of attention.

Four really concentrated hours are sufficient to do one’s most critical work, they’re sufficient to do really good work, and for whatever reason they seem to be the physical limit that most of us have.

In addition, he introduced terms such as 'deliberate rest' and 'cognitive momentum' which I'll definitely be using in future. A highly recommended listen.

Source: Hurry Slowly

You get paid what other people think you're worth

Great post by Seth Godin:

Yes, we frequently sell ourselves too short. We don't ask for compensation commensurate with the value we create. It's a form of hiding. But the most common form of this hiding is not merely lowering the price. No, the mistake we make is in not telling stories that create more value, in not doing the hard work of building something unique and worth seeking out.

Create stuff that people value and that is in scarce supply. Focus on leaving the world a better place than you found it.

Source: Seth’s blog

Meltdown and Spectre explained by xkcd

There’s not much we mere mortals can do about the latest microprocessor-based vulnerabilities, except ensure we apply security patches immediately.

Source: xkcd

Meaningless work causes depression

As someone who has suffered in the past from depression, and still occasionally suffers from anxiety, I find this an interesting article:

If you are depressed and anxious, you are not a machine with malfunctioning parts. You are a human being with unmet needs. The only real way out of our epidemic of despair is for all of us, together, to begin to meet those human needs – for deep connection, to the things that really matter in life.

Meaningful work is important. Our neoliberal economy is removing much of it under the guise of 'efficiency'.

Source: The Guardian

It doesn't matter if you don't use AI assistants if everyone else does

Email is an awesome system. It’s open, decentralised, and you can pick whoever you want to provide your emails. The trouble is, of course, that if you decide you don’t want a certain company, say Google, to read your emails, you only have control of your half of the equation. In other words, it doesn’t matter if you don’t want to use GMail, if most of your contacts do.

The same is true of AI assistants. You might not want an Amazon Echo device in your house, but you don’t spend all your life at home:

Amazon wants to bring Alexa to more devices than smart speakers, Fire TV and various other consumer electronics for the home, like alarm clocks. The company yesterday announced developer tools that would allow Alexa to be used in microwave ovens, for example – so you could just tell the oven what to do. Today, Amazon is rolling out a new set of developer tools, including one called the “Alexa Mobile Accessory Kit,” that would allow Alexa to work with Bluetooth products in the wearable space, like headphones, smartwatches, fitness trackers, other audio devices, and more.

The future isn't pre-ordained. We get to choose the society and culture in which we'd like to live. Huge, for-profit companies having listening devices everywhere sounds dystopian to me.

Source: TechCrunch

Thought Shrapnel #286: New beginnings

The latest issue of the newsletter hit inboxes earlier today!

💥 Read

🔗 Subscribe

Social media short-circuits democracy

I’m wondering whether to delete all my social media accounts, or whether I should stay and fight. The trouble is that no technology is neutral; it always contains biases.

It’s interesting how the narrative has changed since the uprisings in Iran in 2009 and Egypt in 2011:

Because of the advent of social media, the story seemed to go, tyrants would fall and democracy would rule. Social media communications were supposed to translate into a political revolution, even though we don’t necessarily agree on what a positive revolution would look like. The process is overtly emotional: The outrage felt translates directly, thanks to the magic of social media, into a “rebellion” that becomes democratic governance.

But social media has not helped these revolutions turn into lasting democracies. Social media speaks directly to the most reactive, least reflective parts of our minds, demanding we pay attention even when our calmer selves might tell us not to. It is no surprise that this form of media is especially effective at promoting hate, white supremacy, and public humiliation.

In my new job at Moodle, I’m tasked with leading work around a new social network for educators focused on sharing Open Educational Resources and professional development. I think we’ll start to see more social networks based around content than people (think Pinterest rather than Facebook).

Source: Motherboard

Spain is on the wrong timezone

As an historian, I find this fascinating:

So why are Spaniards living behind their geographic time zone?

In 1940, General Francisco Franco changed Spain’s time zone, moving the clocks one hour forward in solidarity with Nazi Germany.

For Spaniards, who at the time were utterly devastated by the Spanish Civil War, complaining about the change did not even cross their minds. They continued to eat at the same time, but because the clocks had changed, their 1pm lunches became 2pm lunches, and they were suddenly eating their 8pm dinners at 9pm.

We were talking over Sunday dinner today how some traditions and practices can stick within families and organisations without them being questioned for years. This is an extreme example!

Source: BBC Travel

Foucault understood the power of ambiguity

To have a settled position on anything is anachronistic. There has to be an element of ambiguity in your work and thinking, otherwise you’re dealing in what Richard Rorty called ‘dead metaphors’.

Foucault understood this by never espousing a theory of power:

Herein lies the richness and the challenge of Foucault’s work. His is a philosophical approach to power characterised by innovative, painstaking, sometimes frustrating, and often dazzling attempts to politicise power itself. Rather than using philosophy to freeze power into a timeless essence, and then to use that essence to comprehend so much of power’s manifestations in the world, Foucault sought to unburden philosophy of its icy gaze of capturing essences. He wanted to free philosophy to track the movements of power, the heat and the fury of it working to define the order of things.
By not spending time defending your own position, you have time to recognise and critique what you see to be wrong and insidious in the world:

Foucault’s skeptical supposition thus allowed him to conduct careful enquiries into the actual functions of power. What these studies reveal is that power, which easily frightens us, turns out to be all the more cunning because its basic forms of operation can change in response to our ongoing efforts to free ourselves from its grip.

I'm reading China Miéville's October: The Story of the Russian Revolution at the moment. It's making me re-realise that power is never given, it's always taken.

Source: Aeon

Fridays are a social construct

I feel like I could have written this post. I agree entirely:

Some of the phenomena governing people's schedules are natural. It does get dark at night and people do need light. It gets cold in the winter and people need heating. But the Earth does not care whether it's the weekday or the weekend, a Wednesday or a Saturday. And yet somehow the society has decreed that Wednesday is a serious business day and any adult roaming the streets during daytime on that day might get weird stares.

As the author points out, knowledge work doesn't depend on people doing it at the same time. In fact, the title of his post is 'Against the synchronous society':

Perhaps there's no need for people in the workplace to expect others to be able to instantly respond to them. In fact, slower, asynchronous communication can lead to more robust institutional memory inside of an organisation. Instead of the easy fix of tapping a colleague on the shoulder to get an answer, the worker might instead devise a solution for an issue themselves or figure it out while typing up an email, adding to the documentation and making sure fewer people have that question in the future.

Great stuff. I, for one, am looking forward to a time when we collectively spend a bit more time reflecting, and a bit less time (knee-jerk) responding.

As an aside, the software running the blog, Kimonote, looks interesting:

Kimonote is a fancy plain text organizer, a macroblogging platform and an antisocial network. It supports Markdown, which allows for a consistent look-and-feel no matter whether you're looking at your own private notes or someone else's public posts. Additional niceties are available, such as a table of contents.

Source: mildbyte

Privacy-based browser extensions

I visit Product Hunt on a regular basis. While there are plenty of examples of hyped apps and services that don’t last six months, there are also some gems in there, especially in the Open Source section!

There’s a Q&A part of the site where this week I unearthed a great thread about privacy-based browser extensions. The top ones were:

The comments and shared experiences are particularly useful. Remember, the argument that you don’t need privacy because you’ve got nothing to hide is like saying you don’t need free speech because you’ve got nothing to say…

Source: Product Hunt

Twitter isn't going to ban Trump, no matter what

Twitter have confirmed what everyone knew all along: they’re not going to ban Donald Trump, no matter what he says or does. It’s too good for business.

Blocking a world leader from Twitter or removing their controversial Tweets would hide important information people should be able to see and debate. It would also not silence that leader, but it would certainly hamper necessary discussion around their words and actions.

It’s a weak, cowardly argument to imply that if Twitter doesn’t provide a platform for Trump, then someone else will. This is absolutely about their growth, absolutely about the fact that they have shareholders.

Source: Twitter blog

Image via CNN

Charisma instead of hierarchy?

An interesting interview with Fred Turner, former journalist, Stanford professor, and someone who spends a lot of time studying the technology and culture of Silicon Valley.

Turner likens tech companies who try to do away with hierarchy to 1960s communes:

When you take away bureaucracy and hierarchy and politics, you take away the ability to negotiate the distribution of resources on explicit terms. And you replace it with charisma, with cool, with shared but unspoken perceptions of power. You replace it with the cultural forces that guide our behavior in the absence of rules.

It's an interesting viewpoint, and one which chimes with works such as The Tyranny of Structurelessness. I still think less hierarchy is a good thing. But then I would say that, because I'm a white, privileged western man getting ever-closer to middle-age...

Source: Logic magazine

Education is about the journey, not the destination

I’m a big fan of Cathy Davidson, and look forward to reading her new book. In this article, she explains that we’ve unleashed an ‘educational monster’ by forcing students to be memorisers rather than content creators:

Increasingly, we are shrinking educational opportunities for our youth worldwide, robbing them of the creativity of the arts, the critical thinking of the humanities and social sciences, and reducing all knowledge to test scores, despite repeated workforce studies stressing the importance of deep learning. The trend is to use standardised tests as the entrance to university and therefore to a middle-class future, even though we have ample research, extending back to the Hermann Ebbinghaus memory experiments of the 1880s, about the evanescence of knowledge crammed for the purpose of test-taking.

As ever with Cathy's writing, it's a good and well-researched read. I'm not sure about framing it in terms of 'outcomes-based' education, however, as judging people by outcomes in the workplace is generally seen as a good thing. Perhaps emphasise that the journey is more important than the destination? That's why granular badges within a portfolio are a great alternative to letter grades and high-stakes testing.

Source: The Guardian

Mozilla is creating an Open Leadership Map

The Mozilla Foundation may have shut down pretty much all of its learning programmes, but it’s still doing interesting stuff around Open Leadership. Chad Sansing writes:

We think of Open Leadership as a set of principles, practices, and skills people can use to mobilize their communities to solve shared problems and achieve shared goals. For example, Mozilla’s web browser, Firefox, was developed with an open code base with community contribution and support.

They're using the Web Literacy Map (work I led during my time with Mozilla) as a reference point. It's early days, but here's what they've got so far:

There’s also a white paper which they say will be updated in February 2018. I’m looking forward to seeing where this goes. Along with the great work being done at opensource.com’s community around The Open Organization, it’s a great time to be an open leader!

Source: Read, Write, Participate

Life in likes

England’s Children’s Commissioner has released a report entitled ‘Life in Likes’ which has gathered lots of attention in my networks. This, despite the fact that during the research they talked to only 32 children. I used to teach over 250 kids a week! 32 is a class size, not a representative sample.

This article includes quotations from parents, such as this one:

Parent Trevor said his 12-year-old twin daughters had moved schools as a result of the pressure from social media, but admits they "can't walk away" from it.

He told BBC Radio 5 live: “I can’t say to them, ‘You can’t use that,’ when I use it."

Yes you can. My kids see me drink alcohol but it doesn’t mean I let them have it. My son has a smartphone with an app lock on the Google Play store so he can’t install apps without my permission.

The solution to this stuff does involve basic digital skills, but mainly what’s lacking here are parenting skills, I think.

Source: BBC News

Dark kitchens, dark factories... is this the future of automation?

I missed this at the end of last year, perhaps because I live in a small town in the north of England rather than a bustling metropolis:

Welcome to the world of ‘dark kitchens’ – fully-equipped commercial kitchens like you’d find attached to a restaurant, except with no restaurant or even a takeaway counter. Also known as virtual kitchens, they are dedicated solely to meeting the ever-growing hunger for online delivery services, facilitated by the likes of third party delivery apps.

These kitchens are anything but dark at peak times such as Friday or Saturday night, as noodles, pizza, curries and much more exotic—and increasingly, healthy—fare is sizzled up on a made-to-order basis while drivers for food delivery platforms such as Just Eat, Deliveroo, Seamless, and Uber Eats wait outside.

Incredible and obvious at the same time.

Source: The Times

Capitalism can make you obese

From a shocking photojournalism story:

With imported soft drinks costing the same or less than bottled water, in a country where tap water is not safe to drink, the poorest people are most likely to develop diabetes. Mexico’s health ministry said in 2016 that 72% of adults were overweight or obese. But the same people are prone to malnutrition thanks to a diet high in sugar and saturated fats and low in fibre.

Source: The Guardian

It's not advertising, it's statistical behaviour-modification

The rest of this month’s WIRED magazine is full of its usual hubris, but the section on ‘fixing the internet’ is actually pretty good. I particularly like Jaron Lanier’s framing of the problem we’ve got with advertising supporting the online economy:

Something has gone very wrong: it's the business model. And specifically, it's what is called advertising. We call it advertising, but that name in itself is misleading. It is really statistical behaviour-modification of the population in a stealthy way. Unlike [traditional] advertising, which works via persuasion, this business model depends on manipulating people's attention and their perceptions of choice. Every single penny Facebook makes is from doing that and 90 per cent of what Google makes is from doing that. (Only a small minority of the money that Apple, Microsoft and Amazon makes is from doing that, so this should not be taken as a complete indictment of big tech.)

Source: WIRED

How to prevent being 'cryptojacked'

The Opera web browser has joined Brave in allowing users to turn on ‘cryptojacking’ protection:

Bitcoins are really hot right now, but did you know that they might actually be making your computer hotter? Your CPU suddenly working at 100 percent capacity, the fan is going crazy for seemingly no reason and your battery quickly depleting might all be signs that someone is using your computer to mine for cryptocurrency.

For a very short period of time around five years ago I 'cryptojacked' visitors to my blog using JavaScript. Back then, Bitcoin was worth so little, and the slowdown for visitors was so great, that I soon turned it off.

Given the recent explosive rise in Bitcoin’s value, however, it would seem that cryptojacking is yet another thing to guard against online…

Source: Opera blog

Fred Wilson's predictions for 2018

Fred Wilson is author of the incredibly popular blog AVC. He prefaces his first post of the year in the following way:

This is a post that I am struggling to write. I really have no idea what is going to happen in 2018.

He does, however, go on to answer ten questions, the most interesting of which are those he answers in the affirmative:
  • Will the tech backlash that I wrote about yesterday continue to escalate? Yes.
  • Will we see more gender and racial diversity in tech? Yes.
  • Will Trump be President at the end of 2018? Yes.
I picked up a copy of WIRED magazine at the airport yesterday for the flight home. (I used to subscribe, but it annoyed me too much.) It is useful, though, for taking the temperature of the tech sector. Given there were sections on re-distributing the Internet, the backlash against the big four tech companies, and diversity in tech, I think they're likely to be amongst the big trends for the (ever-widening) tech sector in 2018.

Source: AVC

Albert Wenger's reading list

Albert Wenger, a venture capitalist and author of World After Capital, invited his (sizeable) blog readership to suggest some books he should read over his Christmas and New Year’s break. The results are interesting, as there’s a mix of technical, business, and more discursive writing.

The ones that stood out for me were:

Former Mozilla colleague John O'Duinn has just sent out Update #14 of his Leading Distributed Teams ebook, so I'm looking forward to reading that soon, too!

Source: Continuations

Data-driven society: utopia or dystopia?

Good stuff from (Lord) Jim Knight, who cites part of his speech in the House of Lords about data privacy:

The use of data to fuel our economy is critical. The technology and artificial intelligence it generates has a huge power to enhance us as humans and to do good. That is the utopia we must pursue. Doing nothing heralds a dystopian outcome, but the pace of change is too fast for us legislators, and too complex for most of us to fathom. We therefore need to devise a catch-all for automated or intelligent decisioning by future data systems. Ethical and moral clauses could and should, I argue, be forced into terms of use and privacy policies.

Jim’s a great guy, and went out of his way to help me in 2017. It’s great to have someone with his ethics and clout in a position of influence.

Source: Medium

Are social networks a public health issue?

I think the author’s correct to frame things in terms of addiction:

Because we are all hooked, it can be hard to recognise your social media habits as problematic. The closest I came to an “aha” moment was during a visit to Facebook’s headquarters at One Hacker Way, Palo Alto, in 2014, when I worked in advertising. Hearing its sales executives explain how much data Facebook had on its users, all the ways it could target people and get them to click on ads, was terrifying. I haven’t posted a personal update on Facebook since. The moment you start thinking about Facebook as a surveillance system rather than a social network, it becomes a lot more difficult to hand it your information.

It's easy to think that 'keeping up-to-date' is part of your job, and that constant use of Facebook, LinkedIn and Twitter is therefore justified. I can tell you, after going pretty much cold turkey on the latter in 2017, that it's not.

Reducing my social media habit didn’t make me more productive – I am very talented at finding ways to waste time. However, it did make me see how little value Facebook added to my life. Choosing to opt out of the constant noise, to reclaim my attention, was a massive relief. I stopped comparing myself with others so much and started to feel a lot happier with my life. It also reduced my anxiety levels. In today’s news cycle, the endless stream of breaking news, amplified by social media, can easily break your spirit.

Source: The Guardian

Commit to improving your security in 2018

We don’t live in a cosy world where everyone hugs fluffy bunnies who shoot rainbows out of their eyes. Hacks and data breaches affect everyone:

If you aren’t famous enough to be a target, you may still be a victim of a mass data breach. Whereas passwords are usually stored in hashed or encrypted form, answers to security questions are often stored — and therefore stolen — in plain text, as users entered them. This was the case in the 2015 breach of the extramarital encounters site Ashley Madison, which affected 32 million users, and in some of the Yahoo breaches, disclosed over the past year and a half, which affected all of its three billion accounts.
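The difference the quote describes can be sketched in a few lines of Python. This is purely illustrative: PBKDF2 stands in here for whatever hashing scheme a given site actually uses, and the function names are mine, not any real site's code.

```python
import hashlib
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    # Passwords are typically stored as a salted, slow hash: a breach
    # leaks only digests, which attackers must still crack offline.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def store_security_answer(answer: str) -> str:
    # Security answers are often stored exactly as typed: a breach
    # leaks them in a directly usable form.
    return answer

salt, digest = store_password("hunter2")
assert digest != b"hunter2"            # not recoverable at a glance
assert store_security_answer("Smith") == "Smith"  # readable to anyone
```

The asymmetry is the point: the same breach that yields unusable password digests hands over security answers ready to use.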

Some of it isn't our fault, however. For example, you can bypass PayPal's two-factor authentication by opting to answer questions about your place of birth and mother's maiden name. This is not difficult information for hackers to obtain:

According to Troy Hunt, a cybersecurity expert, organizations continue to use security questions because they are easy to set up technically, and easy for users. “If you ask someone their favorite color, that’s not a drama,” Mr. Hunt said. “They’ll be able to give you a straight answer. If you say, ‘Hey, please download this authenticator app and point the camera at a QR code on the screen,’ you’re starting to lose people.” Some organizations have made a risk-based decision to retain this relatively weak security measure, often letting users opt for it over two-factor authentication, in the interest of getting people signed up.

Remaining secure online is a constantly moving target, and one that we would all do well to spend a bit more time thinking about. These principles by the EFF are a good starting point for conversations we should be having this year.

Source: The New York Times

To 'quit' isn't necessarily the opposite of having 'grit'

This is a useful way of framing things:

“Quit” doesn’t have to be the opposite of “grit.” This is where “strategic quitting” comes in. Once you’ve found something you’re passionate about, quitting secondary things can be an advantage, because it frees up time to do that number-one thing.
As someone who burned out in their twenties, I definitely agree with the sentiment that time is more important than money:
When we choose an extra hour at work, we are, in effect, choosing one less hour with our kids. We can’t do it all and do it well. And there will not be more time later. Time does not equal money, because we can get more money.
Although I'll be doing some consultancy in 2018, my main focus is on the work I'm doing for Moodle. I've been careful to establish boundaries to ensure that work is sustainable: I'm working four days per week, and I'm doing that based from home.

By my calculations, that gives me 13 hours more ‘free’ time than if I were working in an office in my nearest city. It all adds up!

Source: Fast Company

Now are the Olympics

“And if anything laborious, or pleasant or glorious or inglorious be presented to you, remember that now is the contest, now are the Olympic games, and they cannot be deferred; and that it depends on one defeat and one giving way that progress is either lost or maintained. Socrates in this way became perfect, in all things improving himself, attending to nothing except to reason. But you, though you are not yet a Socrates, ought to live as one who wishes to be a Socrates.” (Epictetus)

How to run an Open Source project

Although I don’t use elementaryOS on my own laptops, we do use it on the family touchscreen PC in our main living space. It’s a beautifully-designed system, and I very much appreciate the way the founders interact with their community in terms of updates, roadmap, and funding:

Every month this year, we’ve published a blog post outlining all of the updates that we’ve released during that month. We’ve made a strong effort to support Loki with regular bug fixes, new features, and other improvements. We’ve also made some big policy and infrastructure changes. It was a busy year at elementary!
This is the kind of thing I'm looking to emulate with Project MoodleNet in 2018. 

With their upcoming ‘Juno’ update based on Ubuntu 18.04, I may just switch to elementaryOS, as ‘Loki’ was good enough for me to voluntarily pay $25 for it, in an age when even proprietary operating systems are ‘free’.

Source: elementaryOS blog

The internet needs distributed DNS

This article talks about hyperlinks, because that’s what mainstream audiences understand, but the issue is the internet’s Domain Name System (DNS):

Domain Name System (DNS) servers power every hyperlink. They rapidly translate the text of a dotcom address into numbers that can then pinpoint the root server and map the precise locations of every web page, every image, video, file -- no matter where it is worldwide.

Good DNS services speed up web sites, balance traffic loads and protect against a wide spectrum of cyber threats. Bad DNS makes sites slow and unstable and makes it easy for criminals to change the address of links on a web page to their malware.
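The name-to-number translation the quote describes is a single call in most languages. A minimal sketch in Python, resolving `localhost` so it works even without a network connection (a real lookup would go out to the DNS hierarchy the same way):

```python
import socket

def resolve(hostname: str) -> set:
    """Translate a human-readable name into the numeric addresses behind it."""
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # Each entry's sockaddr tuple starts with the IP address string.
    return {info[4][0] for info in infos}

# 'localhost' resolves locally, with no external DNS server required.
print(resolve("localhost"))
```

Every link you click triggers some version of this lookup, which is why slow or compromised DNS degrades the whole experience of the web.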

Source: ZDNet

Succeeding with innovation projects

There’s some great advice in this article for those, like me, who are leading innovation projects in 2018:

Your role is to make noise around the idea so that potential stakeholders are excited to learn more about it. At this stage, it’s really important to reach out to key individuals within the company and ask for advice, so you can more easily establish an affiliation between them and the new activity, and cultivate a community of internal supporters.
Source: TNW

Facebook is an instrument of the state

This should not surprise us:

Facebook now seems to be explicitly admitting that it also intends to follow the censorship orders of the U.S. government.
Many people get the majority of their news through Facebook, so censorship isn't just banning someone from a social network; it has an impact on the social and democratic life of nation states:
What this means is obvious: that the U.S. government — meaning, at the moment, the Trump administration — has the unilateral and unchecked power to force the removal of anyone it wants from Facebook and Instagram by simply including them on a sanctions list. Does anyone think this is a good outcome? Does anyone trust the Trump administration — or any other government — to compel social media platforms to delete and block anyone it wants to be silenced?
Source: The Intercept

The importance of downtime

There are a few books I read every morning, on repeat. One of them, Daily Rituals, details the everyday working lives of famous writers, painters, composers, and other well-known figures.

I was reading about Charles Darwin earlier this week, and the author of this article has a book that’s sitting waiting for me to read back at home:

Figures as different as Charles Dickens, Henri Poincaré, and Ingmar Bergman, working in disparate fields in different times, all shared a passion for their work, a terrific ambition to succeed, and an almost superhuman capacity to focus. Yet when you look closely at their daily lives, they only spent a few hours a day doing what we would recognize as their most important work. The rest of the time, they were hiking mountains, taking naps, going on walks with friends, or just sitting and thinking. Their creativity and productivity, in other words, were not the result of endless hours of toil. Their towering creative achievements result from modest “working” hours.
The author also references John Lubbock who was, apparently, one of the best-known authors of his time:
So despite their differences in personality and the different quality of their achievements, both Darwin and Lubbock managed something that seems increasingly alien today. Their lives were full and memorable, their work was prodigious, and yet their days are also filled with downtime.

This looks like a contradiction, or a balance that’s beyond the reach of most of us. It’s not.

I’ve often said that four hours of focused knowledge work is the daily maximum. Factor in emails, meetings, and admin, and the daily routine of figures such as Darwin seems about right.

Source: Nautilus

Culture eats strategy for breakfast

A collection of articles on organisational culture from the Harvard Business Review. I need to examine them in more depth, but the diagram above and paragraph below jumped out at me.

Whereas some cultures emphasize stability—prioritizing consistency, predictability, and maintenance of the status quo—others emphasize flexibility, adaptability, and receptiveness to change. Those that favor stability tend to follow rules, use control structures such as seniority-based staffing, reinforce hierarchy, and strive for efficiency. Those that favor flexibility tend to prioritize innovation, openness, diversity, and a longer-term orientation.
Source: Harvard Business Review

Caulfield's predictions for 2018

Some good stuff in Mike Caulfield’s “somewhat U.S.-centric predictions” for the coming year. In particular:

Creation of pro-government social media army focused domestically. My most out-there prediction. President Trump will announce the creation of a "Fake News Commission" to investigate both journalists and social media. One finding of the committee will be that the U.S. needs to emulate other countries and create an army of social media users to seek out anti-government information and "correct" it.
In other words, a 21st-century version of McCarthyism.

Source: Traces

Image: Washington Post, 1954 (via Spartacus Educational)

The best album covers of 2017

It was only last week that I was telling my children how they’d missed out on the joy of exploring CD inserts to find detailed information on tracks and random artwork.

This post gives 20 examples of great artwork from albums that came out in 2017. I do like Beck’s album, and not just because it’s got a badge-shaped cover:

Speaking of his creation, album cover artist Jimmy Turrell said that Beck commissioned both him and Steve Stacey to create the entire visual representation of his latest album. Packed full of bold colour, Turrell says he and Stacey looked back to their youth for inspiration, considering what stimulated them visually as kids. The Deluxe Vinyl edition allows fans to remove and change pieces to create their own bespoke cover.
My favourite from 2017? Morrissey's Low in High School, which I've used as the featured image for this post.

Source: Creative Bloq

Moving down Maslow's hierarchy of needs using OER?

David Wiley, the standard bearer for Open Educational Resources, says:

Many of us believe that education is an incredibly powerful tool in the fight to increase equity, and this is a primary motivation for our participation in the open education movement. The shared core of the work we do in open education is increasing access to educational opportunity – with the long-term goal of making access to that opportunity truly universal – by licensing educational resources in ways that make them free and 5R-able. That is, by creating, sharing, and improving OER.
However...
In general, without a stable basic needs floor to stand on you aren’t capable of benefitting from access to educational opportunity – including those opportunities made possible by our collective efforts in open education. And unfortunately, as long as basic needs problems persist, those whose basic needs are not being met will be essentially incapable of taking advantage of the opportunities created by OER, while those whose basic needs are being met will be capable of taking advantage of those opportunities. Consequently, while basic needs issues persist, OER will likely expand some of the gaps we intend for it to shrink.
I can't tell whether he's covering his back or advocating for full communism now.

Source: iterating toward openness

Image: CC BY Atelier Disko, Hamburg und Berlin

Potentially huge wind farm proposed in the North Sea

Dogger Bank, which as Doggerland thousands of years ago would have been visible from the North East of England where I live, is the proposed site for a huge new wind farm complex with a central island power hub.

To accommodate all the equipment, the island would take up around 5-6 sq km, about a fifth the size of Hayling Island in the English Channel.

While the actual engineering challenge of building the island seems enormous, Van der Hage is not daunted. “Is it difficult? In the Netherlands, when we see a piece of water we want to build islands or land. We’ve been doing that for centuries. That is not the biggest challenge,” he said.

The short YouTube video is pretty cool.

Source: The Guardian

Few possessions

“A wise man needs few things to make him happy; nothing can satisfy a fool. That is why nearly all men are wretched.” (François de La Rochefoucauld)

Is that you, Mother?

Umm…

Several studies have found that, on average, there’s some physical similarity between one’s parent and one’s partner. That is, your girlfriend might well look a little bit like your mother. This physical similarity is apparent whether you ask strangers to compare facial photos of partners and parents, or whether you assess things such as parent and partner height, hair or eye colour, ethnicity, or even body hair.
Perhaps it's an evolutionary thing?
A wonderful study of all known couples in Iceland across a 165-year period found that those with the most grandchildren were related at about the level of third or fourth cousin – no more, no less. So it seems there is some evolutionary advantage to finding traces of parental features attractive.
Source: Aeon

How do you show off your privilege when everyone's got an iPhone?

It used to be all about conspicuous consumption and bling…

However, the democratisation of consumer goods has made them far less useful as a means of displaying status. In the face of rising social inequality, both the rich and the middle classes own fancy TVs and nice handbags. They both lease SUVs, take airplanes, and go on cruises. On the surface, the ostensible consumer objects favoured by these two groups no longer reside in two completely different universes.
It's all about buying organic produce and privacy these days:
Today’s inconspicuous consumption is a far more pernicious form of status spending than the conspicuous consumption of Veblen’s time. Inconspicuous consumption – whether breastfeeding or education – is a means to a better quality of life and improved social mobility for one’s own children, whereas conspicuous consumption is merely an end in itself – simply ostentation. For today’s aspirational class, inconspicuous consumption choices secure and preserve social status, even if they do not necessarily display it.
Source: Aeon

Lunatics

All are lunatics, but he who can analyse his delusion is called a philosopher (Ambrose Bierce)

How to defuse remote work issues

Good advice here about resolving difficulties with a remote co-worker.

When it comes to delivering feedback, use the same formula that you would in any other feedback situation. First, provide crisp and clear observations of your teammate’s behavior as free of judgment and subjectivity as possible. (For example, instead of “you were rude to me,” try “when you interrupted me as I tried to be heard over the phone…”) Second, describe the impact of the person’s behavior. Phrase the impact as your reaction or impression, not as the objective truth. (“When you talked over me when I was on the conference call, I felt like you don’t respect what I have to say.”) Finally, ask an open-ended question that engages your teammate in a dialogue and helps you to understand one another’s perceptions. (“How did you perceive that call when you were in the meeting room?”) Don’t stop until you each have a clear vision for how a similar situation could play out better the next time.
Working remotely is great, but it can be an emotional rollercoaster.
Most of us avoid or delay uncomfortable conversations even with people who sit beside us. It’s natural to dislike confrontation. Now imagine how easy it is to let concerns fester when your teammate is two time zones away. Avoiding an important conversation is a bad idea with an office mate and an even worse idea with a virtual teammate. Get the issues out in the open as quickly as possible before they sour your relationship and affect your ability to get the job done.
Source: Harvard Business Review

The benefits of decentralised decision-making

I’m not sure I agree with the conclusions of this article, as I don’t agree with the (made-up) premises. At least it begins well:

As Henry Mintzberg noted in The Structuring of Organizations in 1979, “The words centralization and decentralization have been bandied about for as long as anyone has cared to write about organizations.” And that is a pretty long time, at least since 400 B.C., when Jethro advised Moses to distribute responsibility to various levels in the hierarchy.
The author, a 'strategic advisor', introduces four qualities he claims most managers want. I'd question this, and certainly 'perennity', which I think he'd be better off replacing with 'resilience'. In fact, the whole article, by the time you get to the end, seems to be an attempt to explain why decentralisation is a bad idea. But then, he would say that.
In an age where the concept of “self-managed organization” attracts much attention, the question of centralization versus decentralization does not go away. Nicolai Foss and Peter Klein argue in the article “Why Managers Still Matter” that “In today’s knowledge-based economy, managerial authority is supposedly in decline. But there is still a strong need for someone to define and implement the organizational rules of the game.”
The trouble is, I think the rules of the game may have changed.

Source: Harvard Business Review

It's called Echo for a reason

That last-minute Christmas gift sounds like nothing but unadulterated fun after reading this, doesn’t it?

It is a significant thing to allow a live microphone in your private space (just as it is to allow them in our public spaces). Once the hardware is in place, and receiving electricity, and connected to the Internet, then you’re reduced to placing your trust in the hands of two things that unfortunately are less than reliable these days: 1) software, and 2) policy.

Software, once a mic is in place, governs when that microphone is live, when the audio it captures is transmitted over the Internet, and to whom it goes. Many devices are programmed to keep their microphones on at all times but only record and transmit audio after hearing a trigger phrase—in the case of the Echo, for example, “Alexa.” Any device that is to be activated by voice alone must work this way. There are a range of other systems. Samsung, after a privacy dust-up, assured the public that its smart televisions (like others) only record and transmit audio after the user presses a button on its remote control. The Hello Barbie toy only picks up and transmits audio when its user presses a button on the doll.

Software is invisible, however. Most companies do not make their code available for public inspection, and it can be hacked, or unscrupulous executives can lie about what it does (think Volkswagen), or government agencies might try to order companies to activate them as a surveillance device.

I sincerely hope that policy makers pay heed to the recommendations section, especially given the current ‘Wild West’ state of affairs described in the article.

Source: ACLU

Your New Year's resolution for 2018? Ditch Facebook.

If something’s been pre-filtered by Cory Doctorow and Jason Kottke then you know it’s going to be good. Sure enough, the open memo, to all marginally-smart people/consumers of internet “content” by Foster Kamer, is right on the money:

Literally, all you need to do: Type in web addresses. Use autofill! Or even: Google the website you want to go to, and go to it. Then bookmark it. Then go back every now and again.

Instead of reading stories that get to you because they’re popular, or just happen to be in your feed at that moment, you’ll read stories that get to you because you chose to go to them. Sounds simple, and insignificant, and almost too easy, right?

On our flight yesterday, my son asked how I was still reading articles on my phone, despite it being in aeroplane mode. I took the opportunity to explain to him how RSS powers feed readers (I use and pay for Feedly) as well as podcasts.
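Under the hood, RSS is just an XML file that a feed reader fetches on a schedule and parses into a list of posts. A minimal sketch in Python (the feed items and links here are invented for illustration):

```python
import xml.etree.ElementTree as ET

# A tiny RSS 2.0 document, the kind of file a feed reader polls periodically.
FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Thought Shrapnel</title>
  <item><title>First post</title><link>https://example.com/1</link></item>
  <item><title>Second post</title><link>https://example.com/2</link></item>
</channel></rss>"""

def parse_feed(xml_text: str) -> list:
    """Return (title, link) pairs for every item in an RSS channel."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in parse_feed(FEED):
    print(f"{title}: {link}")
```

Because the reader stores what it has already fetched, everything is available offline afterwards, which is exactly why the articles were still readable in aeroplane mode.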

This stuff sounds obvious and easy when you’ve grown up with the open web. But given that the big five tech companies seem to be trying to progressively de-skill consumers, we shouldn’t be complacent.

By going to websites as a deliberate reader, you're making a conscious choice about what you want a media outlet to be—as opposed to letting an algorithm choose the thing you're most likely to click on. Or! As opposed to encouraging a world in which everyone is suckered into reading something with a headline optimized by a social media strategist armed with nothing more than "best practices" for conning you into a click.
Kamer blames Facebook, and given its impact on the news ecosystem, he's correct in doing so:
Their goal, as a company, is to keep you on Facebook—and away from everything else—as long as they possibly can. They do that by making Facebook as addictive to you as possible. And they make it addictive by feeding you only the exact stripe of content you want to read, which they know to a precise, camel-eye-needle degree. It's the kind of content, say, that you won't just click on, but will "Like," comment on, and share (not just for others to read, but so you can say something about yourself by sharing it, too). And that's often before you've even read it!
It's a great read. Why not start by adding Thought Shrapnel's RSS feed to your shiny new feed reader? There's plenty to choose from!

Source: Mashable

The Horizon stops here

Audrey Watters is delightfully blunt about the New Media Consortium, known for their regular ‘Horizon reports’, shutting down:

While I am sad for all the NMC employees who lost their jobs, I confess: I will not mourn an end to the Horizon Report project. (If we are lucky enough, that is, that it actually goes away.) I do not think the Horizon Report is an insightful or useful tool. Sorry. I recognize some people really love to read it. But perhaps part of the problem that education technology faces right now – as an industry, as a profession, what have you – is that many of its leaders believe that the Horizon Report is precisely that. Useful. Insightful.
Source: Hack Education

Put a number next to someone's name and there will be pressure for it to increase

In her review of Daniel Koretz’s new book on testing in schools, Diane Ravitch reminds us of Campbell’s law:

In 1979, the psychologist Donald Campbell proposed an axiom. “The more any quantitative social indicator is used for social decision-making,” he wrote, “the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
Ravitch applies this to high-stakes testing in school, using a story from Soviet Russia to bring the point home:
The classic (and probably apocryphal) illustrations of Campbell’s law come from the Soviet Union. When workers were told that they must produce as many nails as possible, they produced vast quantities of tiny and useless nails. When told they would be evaluated by the weight of the nails, they produced enormous and useless nails. The lesson of Campbell’s law: Do not attach high stakes to evaluations, or both the measure and the outcome will become fraudulent.
High stakes testing in schools is pernicious, Ravitch writes:
The children from elite homes are convinced by their test scores that they deserve their high status; their scores demonstrate their superiority. And children of the poor learn early on that they rank poorly; their test scores confirm their lowly status.
Source: New Republic

Does it take Trump to make badges go mainstream?

Perversely, it might take something like the Trump administration to make Open Badges work at scale. Why? Because Republicans don’t trust Higher Education:

Is support for higher ed fragmenting along political lines? It is if you believe the recent Pew poll showing Republicans’ distrust of higher ed is growing relative to Democrats (on a nearly 2-to-1 margin) is not fake news... In any case, look for Trump’s Department of Education to push on the trend toward more “practical” vocational learning and not just apprenticeships. Higher Ed Act proposals this year may push to open up federal financial aid beyond the credit-hour.
Things, of course, are different in the US to the rest of the world. In Europe I think we've always had a different, and more positive, relationship to vocational education.

Source: Education Design Lab

How to get people to pay you what you're worth

Good advice in this article for people who (like me) are asked regularly whether someone can ‘pick your brain’.

If you decide you do want to give advice, do it on your terms. If they ask to meet for coffee and you don’t have time, send an email instead. If they ask a question that requires a novel-length answer, address one part of it, or send them some helpful links. Don’t fear being explicit that you didn’t have time to answer in full by saying something like: “Thank you for reaching out. Your question requires an answer that I unfortunately do not have time to fully address due to my work. However, you might find the following books/links/thinkers/YouTube videos helpful.”
Given I live in the back of beyond, most of my initial meetings are online, which makes life easier. I give people 30 minutes for free, and then that sometimes leads to them asking me to put together a proposal for them.

What I particularly like about this article is that it encourages readers to find ways to give back to their sector / profession:

Once you’ve created boundaries around when and where you’ll provide help on demand, you can begin looking for other, more expansive avenues for giving back. This can include devoting your time speaking on panels, at schools/universities, on podcasts, or at workshops for free if it’s a cause or audience that would benefit from your knowledge. Though beware of the requests that can often follow on from such engagements, and refer to the third step when answering them.
One thing that's not mentioned in this article that I've found can work is if you offer a 'critical friend' service. This is basically billing them for a day's work at your regular rate, from which they can draw down time for advice when they need it.

Source: Quartz at Work

Building a home online

I discovered ‘John Henry’, the pseudonymous author of this blog, after finding and sharing another post from him earlier this week. He makes a good point in this one about building a home online.

Digitally, I am living in a hotel. Rented space. I can't change the furniture, the furnishing are not mine, if I drink the water it costs me $6.00 per bottle.

It is peaceful in a sterilized, ephemeral way. The next day, I will be gone, and the cleaners will wipe any trace of my existence.

It’s hard to disagree with his metaphor of our life online feeling about as cosy as Eeyore’s house:

In 2002, a site called Myspace was launched, promising you your own space. It was a lie, and it failed. This was the Eeyore Era of home-building, and we haven't progressed much since then.
Source: Clutch of the Dead Hand

Purely technological answers to human problems don't work

In a hugely surprising move, Facebook has found that marking an article as ‘disputed’ on a user’s news feed and putting a red flag next to it makes them want to click on it more. 🙄

The tech giant is doing this in response to academic research it conducted that shows the flags don't work, and they often have the reverse effect of making people want to click even more. Related articles give people more context about what's fake or not, according to Facebook.
The important thing is what comes next:
Facebook's Sheryl Sandberg says Facebook is a technology company that doesn't hire journalists. Without using editorial judgement to determine what's real and what's not, tackling fake news will forever be a technology experiment.
Until Facebook is forced to admit it's a media company, and is regulated as such, we'll continue to have these problems around technological solutionism.

Source: Axios

Nobody likes a goody two-shoes

This is an incredible entry in the School of Life’s Book of Life:

The sickness of the good child is that they have no experience of other people being able to tolerate their badness. They have missed out a vital privilege accorded to the healthy child; that of being able to display envious, greedy, egomaniacal sides and yet be tolerated and loved nevertheless.
I know, and have known, plenty of people who are amazing exam-takers and fantastic at doing what society expects of them. Unfortunately, that's not great preparation for when life throws you curveballs.
At work, the good adult has problems too. As a child, they follow the rules; never make trouble and take care not to annoy anyone. But following the rules won’t get you very far in adult life. Almost everything that’s interesting, worth doing or important will meet with a degree of opposition. A brilliant idea will always disappoint certain people – and yet very much be worth holding on to. The good child is condemned to career mediocrity and sterile people-pleasing.
As a parent of two strong-willed and feisty children, there's plenty to ponder here.

Source: The Book of Life

Life in the outrage economy

Rafael Behr nails it when he says we live in an ‘outrage economy’:

Rage is contagious. It spreads from one sweaty digital crevice to the next, like a fungal infection. It itches like one too. When sitting at the keyboard, it is difficult to perceive wrongness without wanting to scratch it with a caustic retort. But that provides no sustained relief. One side’s scratch is the other side’s itch.
I'm just back from watching Star Wars: The Last Jedi. It's an incredible film with plenty of social commentary. The Rebel Alliance is outraged at what the First Order is doing, just as we're outraged with the order of our society, created by elites.
An outrage economy is lucrative only in an outraged society. Once stoked, the anger becomes self-sustaining, addictive. There is a physiological gratification in rage – a primitive adrenal response that overrides more sophisticated emotions. It can be perversely comforting. Politicised anger feels virtuous. It is the kick of moral purpose, but conveniently stripped of any obligation to consider nuance or alternative perspectives. Hatred of a proposition, or a party, removes interest in understanding why others like it. Self-righteous anger is an excuse not to even try to persuade. St Augustine’s invitation to “love the sinner, hate the sin” does not have much purchase on Twitter.
Perhaps we need to 'use the force' and come into a bit more balance, both individually and as a society. After all, more outrage just feeds the whole edifice from which the bad guys prosper.

Source: The Guardian

Howard Rheingold on cooperation as a solution to our present woes

Howard Rheingold is one of the smartest and most colourful people I’ve ever met. One of his books, Net Smart, was very useful to me while writing my thesis, and I’ve followed his work for a while now.

That’s why I’m delighted that he’s commenting on our current predicament around the technology that connects our society. He’s suggesting some ways forward — including platform co-operatives.

Questions about the threats of technology often come down to the nature of capitalism: The microtargetted advertising that makes Facebook a conduit for hyperpersonalized propaganda is precisely what makes Facebook such a valuable medium for paid advertising — which is what returns profit to Facebook’s stockholders. So what can be done about that? Some argue that because communism failed, there is no alternative remedy. Yet we are seeing potential alternatives beginning to emerge: while platform cooperativism and profit-from-purpose businesses are relatively new, successful cooperative corporations have existed for more than a century. What other models can be added to this list? Can any central principles or points of leverage be inductively derived by examining these alternatives?
Source: Howard Rheingold

We're still figuring out what it means for everyone to be connected

Part of what’s happened over the last 18 months can be attributed to us just getting used to having daily interactions with people around the world. I started doing that 20+ years ago as a teenager, so it’s difficult to imagine what that must be like if you haven’t grown up with the increasing power of connectivity.

Rick Webb points out that the view that we’d automatically be better, connected, might just be incorrect.

We are biological organisms with thousands of years of evolution geared towards villages of 100, 150 people. What on earth made us think that in the span of a single generation, after a couple generations in cities with lots of people around us but wherein we still didn’t actually know that many people, that we could suddenly jump to a global community? If you think about it, it’s insanity. Is there any evidence our brains and hearts can handle it? Has anyone studied it at all?

It’s quite possible that the premise is completely false. And I’m not sure we ever considered for a moment that it could be wrong.

Source: NewCo Shift

GDPR could break the big five's monopoly stranglehold on our data

Almost everyone has one or more accounts with the following companies: Apple, Amazon, Facebook, Google, and Microsoft. Between them they know more about you than your family and the state apparatus of your country, combined.

However, 2018 could be the year that changes all that, all thanks to the General Data Protection Regulation (GDPR), as this article explains.

There is legitimate fear that GDPR will threaten the data-profiling gravy train. It’s a direct assault on the surveillance economy, enforced by government regulators and an army of class-action lawyers. “It will require such a rethinking of the way Facebook and Google work, I don’t know what they will do,” says Jonathan Taplin, author of Move Fast and Break Things, a book that’s critical of the platform economy. Companies could still serve ads, but they would not be able to use data to target someone’s specific preferences without their consent. “I saw a study that talked about the difference in value of an ad if platforms track information versus do not track,” says Reback. “If you just honor that, it would cut the value Google could charge for an ad by 80 percent.”
If this were any other industry, these monolithic companies would already have been broken up. However, there may be another, technical, way of restricting their dominance: forcing them to be interoperable so that users can move their data between platforms.
Portability would break one of the most powerful dynamics cementing Big Tech dominance: the network effect. People want to use the social media site their friends use, forcing startups to swim against a huge tide. Competition is not a click away, as Google’s Larry Page once said; the costs of switching are too high. But if you could use a competing social media site with the confidence that you’ll reach all your friends, suddenly the Facebook lock gets jimmied open. This offers the opportunity for competition on the quality and usability of the service rather than the presence of friends.
Source: The American Prospect

What would a version of Maslow's Hierarchy of Needs for society look like?

I like the notion put forward by Susan Wu in this article — although Maslow’s framework was actually based on co-operation, so re-thinking it as a dynamic hierarchy may be all that’s required:

Perhaps it's time for an updated version of Maslow’s hierarchy of needs, one that underscores what’s essential not just for individuals to flourish, but for the greater good of society. Startups and management executives universally invoke this theory as an accepted canon for framing the human problems they’re trying to solve.

The problem is that Maslow’s framework pertains to individual, not societal, well-being. The reality is that individual needs cannot be met without the social cohesion of belonging, connectedness, and symbiotic networks. A revised design focused on a thriving civilization would have at its root empathy and ethics, and acknowledge that if inequality continues to grow at its current pace, societal well-being becomes impossible to achieve.

Source: WIRED

Decentralised projects to explore in 2018

This is a great post, giving an overview of lots of projects focusing on the decentralisation of the technology we use every day, as well as that which underpins it:

It's becoming gradually clearer that the Facebook-Google-Amazon dominated internet (what André Staltz calls the Trinet) is weighing down society, our economy, and our political system. From US congressional hearings in November over Russian social media influence, to increasing macroeconomic concern about productivity and technology monopolization, to bubbling user dissatisfaction with digital walled gardens, forces are brewing to make 2018 a breakout year for contenders looking to shape the Web in the service of human values, opposed to the values of the increasingly attention-grubby advertising industry.
Source: Clutch of the Dead Hand

Photo: NASA

Brexit Britain means food prescriptions on the NHS

I cannot believe this is happening in my country as we prepare to enter 2018. Food banks and developments like these are born of political choice, not economic necessity.

As reported in The Independent earlier this month, food poverty in Britain is contributing to an increase in Victorian illnesses like rickets and stunted growth in children.

More than 60 per cent of paediatricians believe food insecurity contributed to the ill health among children they treat, according to a 2017 survey by the Royal College of Paediatricians and Child Health.

Dr George Grimble, a medical scientist at University College London, said food poverty was “disastrous” for a child’s development, resulting in nutritional deficits, obesity and squandered potential.

Source: The Independent

What to tell your kids about Santa Claus

My kids, who are ten and six years of age respectively, blatantly don’t believe in Father Christmas. After leaving out a mince pie and glass of whisky last night, they asked this morning whether I’d enjoyed it!

As a church-going family, it’s never been a huge deal as to whether Santa Claus is literally real. Christmas isn’t really about a guy in a red suit furtively climbing down an impossible number of chimneys.

What to tell your children, and when to admit that Father Christmas doesn’t really exist, is still awkward, however. Although there’s a twinkle in my eye when I talk to them about ‘him’, I still haven’t admitted that it’s really me filling up the stockings at the end of their bed each year.

In this article, Maria Popova quotes the cultural anthropologist Margaret Mead, who read her own children stories about Santa Claus legends from many different countries. The difference between ‘literal’ and ‘poetic’ truth is an important one. Especially this year. And particularly at Christmas.

Disillusionment about the existence of a mythical and wholly implausible Santa Claus has come to be a synonym for many kinds of disillusionment with what parents have told children about birth and death and sex and the glory of their ancestors. Instead, learning about Santa Claus can help give children a sense of the difference between a “fact” — something you can take a picture of or make a tape recording of, something all those present can agree exists — and poetic truth, in which man’s feelings about the universe or his fellow men is expressed in a symbol.
Source: Brain Pickings

2018: the year of Linux on the desktop?

There’s a perpetual joke in open source circles that next year will be ‘the year of Linux on the desktop’. GNU/Linux, of course, is an operating system that comes in a range of ‘distributions’ (I use Ubuntu and Elementary OS on a range of devices).

In this article, the author outlines 10 reasons that Linux isn’t used by more people. I think he’s spot-on:

  1. Fragmented market
  2. Lack of special applications
  3. Lack of big name applications
  4. Lack of API and ABI stability
  5. Apple resurgence
  6. Microsoft’s aggressive response
  7. Piracy
  8. Red Hat mostly stayed away
  9. Canonical business model not working out
  10. Original device manufacturer support
That being said, I'm all-in on Linux now. I can't imagine going back to the vendor lock-in provided by macOS, Windows, or Chrome OS.

Source: Christian F.K. Schaller

What you read determines who you are

Shane Parrish from Farnam Street has written an ‘annual letter’ to his audience, much like his hero Warren Buffett. I particularly liked this section:

The people you spend time with shape who you are. As Goethe said, “tell me with whom you consort and I will tell you who you are.” But Goethe didn’t know about the internet. It’s not just the people you spend your time with in person who shape you; the people you spend time with online shape you as well.

Tell me what you read on a regular basis and I will tell you what you likely think. Creepy? Think again. Facebook already knows more about you than your partner does. They know the words that resonate with you. They know how to frame things to get you to click. And they know the thousands of people who look at the same things online that you do.

When you’re reading something online, you’re spending time with someone. These people determine our standards, our defaults, and often our happiness.

Every year, I make a point of reflecting on how I’ve been spending my time. I ask myself who I’m spending my time with and what I’m reading, online and offline. The thread of these questions comes back to a common core: Is where I’m spending my time consistent with who I want to be?

Source: Farnam Street Blog

The immorality of retaining wealth

The image I’ve chosen for this post came via social.coop rather than the article cited, but it does indicate where non-inherited wealth comes from. This wealth is then often used for investment or speculation that then becomes unearned income.

I like the way that the author frames things in terms of how much people retain, rather than how much they earn:

Note that this is a slightly different point than the usual ones made about rich people. For example, it is sometimes claimed that CEOs get paid too much, or that the super-wealthy do not pay enough in taxes. My claim has nothing to do with either of these debates. You can hold my position and simultaneously believe that CEOs should get paid however much a company decides to pay them, and that taxes are a tyrannical form of legalized theft. What I am arguing about is not the question of how much people should be given, but the morality of their retaining it after it is given to them.
Also, I like the idea of a 'maximum moral income':
We can define something like a “maximum moral income” beyond which it’s obviously inexcusable not to give away all of your money. It might be 50 thousand. Call it 100, though. Per person. With an additional 50 allowed per child. This means two parents with a child can still earn $250,000! That’s so much money. And you can keep it. But everyone who earns anything beyond it is obligated to give the excess away in its entirety. The refusal to do so means intentionally allowing others to suffer, a statement which is true regardless of whether you “earned” or “deserved” the income you were originally given. (Personally, I think the maximum moral income is probably much lower, but let’s just set it here so that everyone can agree on it. I do tend to think that moral requirements should be attainable in practice, and a $30k threshold would actually require people to experience some deprivation whereas a $100k threshold indisputably still leaves you with an incredibly comfortable lifestyle better than almost any other had by anyone in history.)
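The arithmetic of the quoted rule is simple enough to sketch in a few lines. The function names are mine, and the $100k-per-adult and $50k-per-child defaults are just the illustrative figures from the quote, not anything more rigorous:

```python
def maximum_moral_income(adults: int, children: int,
                         per_adult: int = 100_000,
                         per_child: int = 50_000) -> int:
    """Household income beyond this, on the author's rule, should be given away."""
    return adults * per_adult + children * per_child

def obligated_donation(household_income: int, adults: int, children: int) -> int:
    """The excess over the maximum moral income, which the rule says must be donated."""
    return max(0, household_income - maximum_moral_income(adults, children))
```

Two parents with one child get a threshold of $250,000, matching the quote’s worked example; on this rule, a household earning $300,000 would be obligated to give away $50,000.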
Source: Current Affairs

Sticks and stones

This article, originally given as a lecture, focuses on the worrying fact that we no longer seem to know how to disagree with one another any more. I’ve certainly witnessed this with the ‘hive mind’ on social networks, who are outraged if anyone so much as questions what keyboard warriors see as sacred tenets. 

In other words, to disagree well you must first understand well. You have to read deeply, listen carefully, watch closely. You need to grant your adversary moral respect; give him the intellectual benefit of doubt; have sympathy for his motives and participate empathically with his line of reasoning. And you need to allow for the possibility that you might yet be persuaded of what he has to say.
I subscribe to the view that we should have strong opinions, weakly held. In other words, we should be neither embarrassed nor reticent to say what we think, but we should be ready to change our mind. This is why the EU 'right to be forgotten' legislation is so important. We grow up, emotionally, physically, and intellectually.
There’s no one answer. What’s clear is that the mis-education begins early. I was raised on the old-fashioned view that sticks and stones could break my bones but words would never hurt me. But today there’s a belief that since words can cause stress, and stress can have physiological effects, stressful words are tantamount to a form of violence. This is the age of protected feelings purchased at the cost of permanent infantilization.
Source: The New York Times

Blockchains are boring

The author of this article works in finance and describes himself as “whatever the opposite of a futurist is”. He does, however, make some decent points, even though he may be a little short-sighted:

In the end, the advantages of the existing human and software systems surrounding transactions — from verifying identity with a driver’s license to calling and clarifying the statements made in a credit disputed transaction to automatically billing your credit card for a newspaper subscription — outweigh the purported benefits, as well as hidden costs, of irrevocable, automated execution. Blockchain enthusiasts often act as if the hard part is getting money from A to B or keeping a record of what happened. In each case, moving money and recording the transaction is actually the cheap, easy, highly-automated part of a much more complex system.
Source: hackernoon

The upside of kids watching Netflix instead of TV

In our house, on the (very) rare occasions we’re watching live television that includes advert breaks, I mute the sound and do a humorous voice-over…

With more homes than ever becoming ‘Netflix Only’ homes, we wanted to see how many hours of commercials kids in these homes are being spared. We were able to determine that kids in ‘Netflix Only’ homes are saved from just over 230 hours of commercials a year when compared to traditional television viewership homes.
Source: Exstreamist

Human Extinction

Via Audrey Watters, this is incredible. Read the whole thing; capitalism can’t, and won’t, save us:

Encourage the buying of Coca-Cola soda with polar bears on the cans to raise awareness.

Corporations partner with environmental non-profits. Coca-Cola launches “Arctic White for Polar Bears.”

Source: Motherboard

How to be a consultant

I stumbled across this via Hacker News. This guy basically explains how consulting works, with some great advice. Here are three parts that stood out for me:

This is, far and away, the most important lesson to learn as a consultant. People who are unsavvy about business, like me in 2009 or like most freelancers today, treat themselves like commodity providers of a well-understood service that is available in quantity and differentiated purely based on price. This is stunningly not the case for programming, due to how competitive the market for talent is right now, and it is even more acutely untrue for folks who can program but instead choose to offer the much-more-lucrative service "I solve business problems -- occasionally a computer is involved."
I don't actually think this is just a programming thing, and although I'm no longer in a position to be able to hire myself out on a weekly basis, the following approach sounds sensible:
If you quote hourly rates rather than weekly rates, that encourages clients to see you as expensive and encourages them to take a whack at your hourly just to see if it sticks. Think of anything priced per hour. $100 an hour is more than that costs, right? So $100 per hour, even though it is not a market rate for e.g. intermediate Ruby on Rails programmers, suddenly sounds expensive. Your decisionmaker at the client probably does not make $100 an hour, and they know that. So they might say "Well, the economy is not great right now, we really can't do more than $90." That isn't objectively true, the negotiator just wants to get a $10 win... and yet it costs you 10% of your income.
I always mean to ask for case studies, but never get around to it. He explains why it works:
I always ask to follow a successful consulting engagement with a case study. My pitch is "This is a mutual win: you get a bit more exposure and I get a feather in my cap, for landing the next client." Case studies of successful projects with some of my higher profile consulting clients (like e.g. Fog Creek) helped me to get other desirable consulting clients. Very few clients turn down free publicity, particularly if you offer to do all the work in arranging it.
Source: Patrick McKenzie

Ethical business means fair pay (and co-ownership?)

Partly a marketing move, for sure, but this move to ethical business is encouraging. See also Buffer’s transparent salary calculator. The next move for companies like this would be for employees to be co-owners.

Starting 2018, Basecamp is paying everyone as though they live in San Francisco and work for a software company that pays in the top 10% of that market (compared to base pay + bonus, but not options).

We don’t actually have anyone who lives in San Francisco, but now everyone is being paid as though they did. Whatever an employee pockets in the difference in cost of living between where they are and the sky-high prices in San Francisco is theirs to keep.

This is not how companies normally do their thing. I’ve been listening to Adam Smith’s 1776 classic on the Wealth of Nations, and just passed through the chapter on how the market is set by masters trying to get away with paying the least possible, and workers trying to press for the maximum possible. An antagonistic struggle, surely.

It doesn’t need to be like that. Especially in software, which is a profitable business when run with restraint and sold to businesses.

Source: Signal v. Noise

Reputation on the dark net

I know someone who lives in London and gets weed delivered through his letterbox from the dark net with Amazon-like efficiency. So I can entirely believe this write-up:

The three key traits of trustworthiness—competence, reliability, and honesty—also apply to drug vendors. To highlight reliability, many reviews point out the speed of response and delivery. For example, “I ordered 11.30 a.m. yesterday and my package was in my mailbox in literally twenty-five hours. I’ll definitely be back for more in the future,” commented a buyer on Silk Road 2.0. One of the ways skills and knowledge are reviewed is how good a vendor’s “stealth” is, that is, how cleverly they disguise their product so that it doesn’t get detected. “Stealth was so good it almost fooled me,” wrote a satisfied buyer on an MDMA listing on the AlphaBay market. Established vendors are very good at making it look (and smell) like any old regular package. Excessive tape or postage, reused boxes, presence of odor, crappy handwritten addresses, use of a common receiver alias such as “John Smith” and even spelling errors are bad stealth.
Source: Nautilus

Is it pointless to ban autonomous killing machines?

The authors do have a point:

Suppose the UN were to implement a preventive ban on the further development of all autonomous weapons technology. Further suppose – quite optimistically, already – that all armies around the world were to respect the ban, and abort their autonomous-weapons research programmes. Even with both of these assumptions in place, we would still have to worry about autonomous weapons. A self-driving car can be easily re-programmed into an autonomous weapons system: instead of instructing it to swerve when it sees a pedestrian, just teach it to run over the pedestrian.
Source: Aeon

Your brain is not a computer

I finally got around to reading this article after it was shared in so many places I frequent over the last couple of days. As someone who has studied Philosophy, History, and Education, I find it well-written but unremarkable. Surely we all know this… right?

Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.
Source: Aeon

Edward Snowden wants to help you use your Android smartphone to protect yourself

Since 2013, Edward Snowden has been advising people and creating software. The Haven app he’s been working on looks interesting, and given I’ve got a spare Android smartphone, I might try it in my home office!

Designed to be installed on a cheap Android burner, Haven uses the phone's cameras, microphones and even accelerometers to monitor for any motion, sound or disturbance of the phone. Leave the app running in your hotel room, for instance, and it can capture photos and audio of anyone entering the room while you're out, whether an innocent housekeeper or an intelligence agent trying to use his alone time with your laptop to install spyware on it. It can then instantly send pictures and sound clips of those visitors to your primary phone, alerting you to the disturbance. The app even uses the phone's light sensor to trigger an alert if the room goes dark, or an unexpected flashlight flickers.
Source: WIRED

Update: more details in an article at The Intercept

High-performing schools in England less accessible since 2010

Same old Tories, defunding education and entrenching privilege:

Access to high performing schools in England has become more geographically unequal over the period 2010-2015. This is in spite of government policies aimed at improving school performance outside higher performing areas such as London. Virtually all local authorities with consistently low densities of high performing school places are in the North, particularly the North East and Yorkshire and the Humber. In Blackpool and Hartlepool local authorities there are no high performing secondary school places.
Source: Education Policy Institute

Silicon Valley looking to skills from the Humanities

Cathy Davidson writing about the subjects that teach the kinds of skills that employers are really looking for:

Google’s studies concur with others trying to understand the secret of a great future employee. A recent survey of 260 employers by the nonprofit National Association of Colleges and Employers, which includes both small firms and behemoths like Chevron and IBM, also ranks communication skills in the top three most-sought after qualities by job recruiters. They prize both an ability to communicate with one’s workers and an aptitude for conveying the company’s product and mission outside the organization. Or take billionaire venture capitalist and “Shark Tank” TV personality Mark Cuban: He looks for philosophy majors when he’s investing in sharks most likely to succeed.
Source: The Washington Post

Problems with reputation in the gig economy

The solution to the problems we see with platform capitalism is, of course, platform co-operativism, also known as allowing these workers to own the businesses for which they work.

Many of these platforms don’t let workers have any control over their reputations. I don’t want to sugarcoat the problems of reputation for workers with traditional jobs, but in some ways reputation is much more punishing for platform workers. There have been many stories about Airbnb, Uber, and others removing workers from their platforms, with little to no notice or ability to correct problems. In fact, Uber drivers are required to maintain a certain rating in order to stay on the platform—a fact that few passengers know. Workers in most cases lack the ability to challenge the stain on their reputations, and sometimes they don’t even know why their reputations might have suffered. Platforms are highly dependent on customer ratings for policing the quality of their workforce, but they haven’t figured out how to correct for those same customers’ race and gender bias. It can feel to the worker like it’s “one strike and you’re out”—and that arbitrariness just adds to the instability of gig-work. In addition, reputation isn’t portable. If Uber drivers want to change platforms and start delivering packages for Instacart, they have to start from scratch to build up a good reputation on the new site—even though they are using skills that are valuable to both sites.

Source: WIRED

Digital literacies and 'proximal depravity'

Martin Weller on how algorithms feeding on engagement draw us towards ever more radical stuff online:

There are implications for this. For the individual I worry about our collective mental health, to be angry, to be made to engage with this stuff, to be scared and to feel that it is more prevalent than maybe it really is. For society it normalises these views, desensitises us to them and also raises the emotional temperature of any discussion. One way of viewing digital literacy is reestablishing the protective layer, learning the signals and techniques that we have in the analogue world for the digital one. And perhaps the first step in that is in recognising how that layer has been diminished by algorithms.
Source: The zone of proximal depravity

How 'flu kills people

Nasty:

After entering someone's body—usually via the eyes, nose or mouth—the influenza virus begins hijacking human cells in the nose and throat to make copies of itself. The overwhelming viral horde triggers a strong response from the immune system, which sends battalions of white blood cells, antibodies and inflammatory molecules to eliminate the threat. T cells attack and destroy tissue harboring the virus, particularly in the respiratory tract and lungs where the virus tends to take hold. In most healthy adults this process works, and they recover within days or weeks. But sometimes the immune system's reaction is too strong, destroying so much tissue in the lungs that they can no longer deliver enough oxygen to the blood, resulting in hypoxia and death.

Source: Scientific American